This is weird because other sources do point towards a productivity gap. For example, this report concludes that "European productivity has experienced a marked deceleration since the 1970s, with the productivity gap between the Euro area and the United States widening significantly since 1995, a trend further intensified by the COVID-19 pandemic".
Specifically, it looks as if, since 1995, the GDP per capita gap between the US and the eurozone has remained very similar, but this is due to a widening productivity gap being cancelled out by a shrinking employment rate gap:
This report from Banque de France has it that "the EU-US gap has narrowed in terms of hours worked per capita but has widened in terms of GDP per hours worked", and that in France at least this can be attributed to "producers and heavy users of IT technologies":
The Draghi report says 72% of the EU-US GDP per capita gap is due to productivity, and only 28% is due to labour hours:
Part of the discrepancy may be that the OWID data only goes until 2019, whereas some of these other sources report that the gap has widened significantly since COVID. But that can't be the whole story: the first plot above already shows a widening gap before COVID.
Or maybe most of the difference is due to comparing the US to France/Germany, versus also including countries like Greece and Italy that have seen much slower productivity growth. But that doesn't explain the France data above (it still shows a gap between France and the US, even before COVID).
Thanks for this. I already had some sense that historical productivity data varied, but this prompted me to look at how large those differences are and they are bigger than I realised. I made an edit to my original comment.
TL;DR: Current productivity people mostly agree about. Historical productivity they do not. Some sources, including those in the previous comment, think Germany was more productive than the US in the past; being less productive now is more damning under that view than under one where Germany has always lagged.
***
For simplicity I'm going to focus on US vs. Germany in the first three bullets:
I struggle to find sources claiming a large gap in current productivity between the US and Germany.
However, historical productivity estimates vary much more significantly.
Some, like Our World in Data, show German productivity below US productivity near-continuously, e.g. 6% lower in 1995.
Some sources claim Germany had higher productivity 20+ years ago, which, combined with the slightly lower present-day productivity that everybody agrees on, can imply noticeably lower growth:
If you look at the charts in the Banque de France report you can see an example of this, with German productivity given as 3% higher in 2000.
A striking example is the OECD, which puts German productivity >10% higher in 1995.[2]
The Bergeaud report - which is also the source for a chart in Draghi's report, shown below - has German productivity around 8% higher in 1995.
So there are two competing stories, which I don't know how to adjudicate between:
German productivity used to be slightly higher than US productivity, but is now slightly lower.
German productivity has always been slightly lower than US productivity.
German productivity is higher than EU productivity as a whole, by a factor of roughly 1.15x.
So, using (3), you can replace 'slightly higher' with 'slightly lower' and 'slightly lower' with 'significantly lower' in the two stories above, producing two parallel competing stories for the EU as a whole.
This link, with chart copied below as 'Figure 3', is an example of the 'EU productivity has always been significantly lower than US productivity' story.
Note also that this chart agrees with Bergeaud/Draghi - chart also below - about the current state; they both show EU-wide productivity at around 82% of US productivity in the present day. It's the 90s that they sharply disagree about, where one graph shows 74% and the other shows 95%, almost a 1.3x gap.
***
Where does that leave the conversation about European regulation? This is just my $0.02, but:
In my opinion the large divergences of opinion about the 90s, while academically interesting, are only indirectly relevant to the situation today. The situation today seems broadly accepted to be as follows:
Western and Northern European countries - Germany, Austria, France, the Netherlands, Belgium, Luxembourg, Denmark, Norway, Sweden - have very similar productivity to the US.
Eastern EU countries - mostly former Eastern Bloc countries - have much lower productivity, but are catching up fast.
Southern EU countries, e.g. Italy/Spain/Greece, have been languishing.
(Incidentally, so has the UK. I started to look at this when trying to understand how UK productivity compared to other countries, and was surprised to learn that the gap vs. nearby European countries was very similar to the gap vs. the US.)
Together, these create an average productivity of around 82% of the US level.
I think that when Americans think about European regulations, they are mostly thinking about the Western and Northern countries. For example, when I ask Claude which EU countries have the strongest labour rights, the list of countries it gives me is entirely a subset of those countries. But unless you think replacing those regulations with US-style regulations would allow German productivity to significantly exceed US productivity, any claim that this would close the GDP per capita gap between the US and Germany - around 1.2x - without more hours being worked is not very reasonable. Let alone the GDP gap, which layers on the US' higher population growth.
Digging into Southern Europe and figuring out why e.g. Italy and Germany have failed to converge seems a lot more reasonable. Maybe regulation is part of that story. I don't know.
So I land pretty much where the Economist article is, which is why I quoted it:
But in aggregate, western Europeans get just as much out of their labour as Americans do. Narrowing the gap in total GDP would require additional working hours, either via immigration or by raising the amount of time citizens spend on the job.
I am eyeballing page 66 and adding together the 'TFP' and 'capital deepening' factors. I think that amounts to labour productivity, and indeed the report does say "labour productivity...ie the product of TFP and capital deepening". I am less confident about this than the other figures though.
Unhelpfully, the data is displayed as a % of 2015 productivity. I'm getting my claim from (a) the OECD putting German 1995 productivity at 80% of 2015 levels, vs. the US being at 70% of 2015 levels, and (b) 2022 productivity being 107% vs. 106% of 2015 levels. Given the OECD has 2022 US/German productivity virtually identical, I think the forced implication is that they think German productivity was >10% higher in 1995.
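A quick back-of-the-envelope sketch of that implication, using the index values quoted above (the variable names are just for illustration):

```python
# OECD-style data: each country's productivity expressed as a % of its
# own 2015 level, so cross-country levels can't be read off directly.
germany_1995, us_1995 = 80.0, 70.0    # % of own 2015 level (figures above)
germany_2022, us_2022 = 107.0, 106.0  # % of own 2015 level (figures above)

# If 2022 levels are virtually identical across the two countries, we can
# anchor both series at 2022 and back out the implied 1995 level ratio.
implied_ratio_1995 = (germany_1995 / germany_2022) / (us_1995 / us_2022)
print(f"Implied German/US productivity ratio in 1995: {implied_ratio_1995:.2f}")
```

This comes out at roughly 1.13, i.e. German productivity more than 10% above the US in 1995, consistent with the claim above.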
Thanks for the great clarifications, Lauren! Strongly upvoted.
Another specific example I found out about yesterday: someone was able to pass something through their local government that led to 400 million animals being spared, which wasn't even on the radar before they entered. It seems extremely unlikely that this kind of leverage and counterfactual impact would hold for the best vs. next-best candidate in an NGO.
Interesting example! I would be interested to know more, but I understand it may be sensitive information to share publicly. I think one can help 400 M shrimp by donating 26.7 k$ (= 400*10^6/(15*10^3)) to the Shrimp Welfare Project (SWP). So, if your example was representative of the impact of a career in policy inside the system, and the impact per animal helped in your example matched that of SWP (which I estimated to be 0.0426 DALYs averted), maximising donations could still be better. For a career of 40 years, one would only need to donate 668 $ (= 26.7*10^3/40) more to SWP per year relative to the career in policy inside the system.
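The arithmetic above can be sketched as follows (the 15,000 shrimp helped per dollar is the SWP cost-effectiveness figure assumed in the comment's calculation, not an independently verified number):

```python
# Back-of-the-envelope comparison of the policy win vs. donations to the
# Shrimp Welfare Project (SWP).
animals_spared = 400e6        # animals spared in the policy example
shrimp_per_dollar = 15e3      # assumed SWP cost-effectiveness (shrimp per $)
career_years = 40

matching_donation = animals_spared / shrimp_per_dollar  # one-off $ amount
extra_per_year = matching_donation / career_years       # spread over a career
print(f"Matching one-off donation: ${matching_donation:,.0f}")
print(f"Extra donation per year over {career_years} years: ${extra_per_year:,.0f}")
```

This reproduces the ~26.7 k$ one-off figure and the roughly $670/year figure over a 40-year career.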
To quickly add on to what Toby wrote: the CEA Online Team has also been redesigning effectivealtruism.org and we expect to soft launch it soon. I post quick takes when we update our half-quarterly plans, so you can follow along there. :)
Hey! I'm the current staff-member working on the EA Newsletter - and I'm currently working on the EA Newsletter improvement project we didn't have time for before. So far this has been:
Making the EA Newsletter sign-up box more prominent in a few places (EA.org and CEA.org) + adding a link to the EA reddit side-panel (surprisingly big community).
Making the sign-up flow single-opt-in.
Designing better metrics to track impact and growth.
Re-writing the intro email campaign people get when they sign up and A/B testing it - this started recently so no new findings yet, but we should have info to improve it at the end of the month.
The next step is more seriously thinking about marketing, considering advertising it, integrating it more with other CEA touchpoints etc... Stay tuned.
Also, I always welcome any suggestions for low-hanging fruit in Newsletter marketing (I'm sure there is a lot of this), as well as general feedback on the Newsletter itself.
Thanks for the reply Toby! These seem like great steps to be taking, and I'm glad they're in the works.
Since you ask about suggestions, here are some other things I'd be looking at if I were in your shoes.
Working with campus groups to solicit subscriptions. Organizers at Middlebury, a very small school, just reported creating 80 GWWC trial pledges through tabling. Presumably they could garner much higher numbers if they were asking for subscriptions rather than donations.
The total subscriber count has been falling since FTX. I suggest digging into the data on unsubscribers to learn more about this cohort. When did they subscribe? Were they previously engaging with the newsletter or does it look like people just unsubscribing from something they never looked at in the first place? I think this could provide a valuable data point regarding community retention/attrition, and I hope other projects (e.g. the forum team) would undergo a similar exercise.
There are currently ~60k subscribers, and approximately half of them joined in the short window between June 2016 and February 2017. This was obviously a period of aggressive outreach for the newsletter. The obvious question is: was it worthwhile? Presumably a lot of these folks never engaged with the newsletter or unsubscribed. But if a decent percentage of people who subscribed as a result of the more aggressive marketing went on to behave similarly to 'normal' subscribers, that has big implications for the newsletter and other EA outreach activities.
1. Does the EA Forum support <details> / <summary> blocks, for hidden content? If so, I think that should be heavily used in these summaries.
2. If (1) is done, then I'd like sections like:
- related materials
- key potential counter-claims
- basic evaluations, using some table.
Then, it would be neat if the full prompt for this was online, and maybe if there could be discussion about it.
Of course, even better would be systems where these summaries could be individualized or something, but that would be more expensive.
Executive summary: The author shares how they introduced Effective Altruism (EA) to friends unfamiliar with the movement by explaining its core ideas, personal impact, and diverse community, encouraging more open conversations and engagement with EA.
Key points:
After attending EA Global conferences and wearing EA-branded clothing, the author received unexpected interest, prompting them to write a public explainer for friends unfamiliar with EA.
The post introduces EA through two central questions: "How do we know we're doing good?" and "How do we do good better?", emphasizing evidence-based charity evaluation and moral impartiality.
The author outlines EA's roots in cost-effectiveness (e.g., global health interventions like anti-malaria nets) and moral philosophy (e.g., valuing all lives equally, longtermism).
Examples are given of EA-aligned actions, such as kidney donation, pandemic prevention, AI safety, and global health careers, some of which the author or their friends pursue.
The author highlights the diversity and global reach of the EA community, describing it as ambitious, nerdy, kind, and open to critique.
They encourage others to explore EA via recommended resources (like 80,000 Hours and local groups) and offer to have personal conversations to make the ideas more accessible.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: This post offers a practical framework for critically evaluating advice by assessing the advice giver's awareness, experience, and intention, especially when navigating uncertainty or crises where poor advice can have outsized negative consequences.
Key points:
Not all advice should be followed: its usefulness depends on how well it matches your situation, which requires assessing the advice giver's awareness of your context, relevant experience, and underlying intentions.
Emotional states, both yours and the advice giver's, can bias how advice is given, received, and interpreted; recognizing this can improve judgment.
Advice may be less applicable if your background or goals differ significantly from common expectations, especially if you are on a non-standard or trailblazing path.
Crisis situations make good advice both more essential and harder to evaluate, due to limited resources, higher risk, and greater emotional influence.
When overwhelmed, prioritizing which advice to evaluate deeply, especially unsolicited advice, helps preserve mental bandwidth while still benefiting from support.
Ultimately, even meta-advice (like this post) should be critically assessed using the same framework; reasoning behind advice may be more valuable than the advice itself.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
I have no immediate or useful feedback on this specific question, but just wanted to say that I'll be starting as an economics professor in the Fall at Middlebury. I'd be excited to meet and engage with y'all! If any of your identified bottlenecks are something a faculty member would be able to help with, keep me in mind :)
What 80k programmes will be delivering in the near-term
In response to questions that we and CEA have received about how, and to what extent, our programme delivery will change as a result of our new strategic focus, we wanted to give a tentative indication of our programme's plans over the coming months.
The following is our current guess of what we're going to be doing in the short term. It's quite zoomed in on the things that are or aren't changing as a result of our strategic update, rather than going into detail on: a) what things we've decided not to prioritise, even though we think they'd be valuable for others to work on; b) things which aren't affected by our strategy very much (such as our operations functions).
It's also written in the context of 80k still thinking through our plans, so we're not able (or trying) to give a firm commitment of what we'll definitely do or not do. Despite our uncertainty, we thought it'd be useful to share the tentative plans that we have here, so that people considering what to work on or whether to recommend 80k's resources have an idea of what to expect from us.
~
To be clear, we think it's an unspeakable travesty that we live in a world where there is so much preventable suffering and death going unaddressed. The following is a concise statement of our priorities, but should not be taken as an indication that we think it's anything other than a tragedy that so much triage is needed.
We would love it if our programmes could continue to deliver resources focusing on a wider breadth of impactful cause areas, but we think unfortunately the situation with AI is severe and urgent enough that we need to prioritise using our capacity to help with it.
In writing this, we hope that we can help others to figure out where the gaps left by 80k are likely to be, so that they are easier to fill, and also to understand how 80k might still be useful to them / their groups.
~
Web
User flow - Historically, and in our upcoming plans, our site user flow takes new users to the career guide - primarily a principles-first framing on impactful careers. We expect to keep this user flow for the immediate future, though we might:
Update the guide to bring AI safety up sooner / more prominently (though we overall expect it to remain a principles-first resource)
Introduce a second user flow, targeting users who reached 80k with an existing interest in helping AI to go well.
Broad site framing - We're currently planning a project to update our site to reflect more "front-and-centre" our prioritisation of AI and the urgency we think should be afforded to it. That said, we expect to maintain our overall "impactful careers" high-level focus as the initial framing people encounter when reaching the site via our front page for the first time. We continue to view EA principles as important for pursuing high-impact careers, including in AI safety and policy, so plan to continue to highlight them.
New publications - Going forward, we're planning to increase the proportion of new content that focuses on AI-safety-relevant topics. To do this at the standard we'd like, we'll need to stop writing new content on non-AI-safety topics.
As mentioned in our post, we think the topics that are relevant here are "relatively diverse and expansive, including intersections where AI increases risks in other cause areas, such as biosecurity".
Existing content - As mentioned, we plan for our existing web content to remain accessible to users, though non-AI topics will not be featured or promoted as prominently in the future.
Podcast
We expect ~80% of our podcast episodes to be focused on AI. In the last 2 years, ~40% of our main-feed content has been AI focused.
As you might have seen, the podcast team is also hoping to hire another host and a chief of staff to scale up the team to allow them to more comprehensively cover AGI developments, risks, and governance.
Advising
Broadly speaking, who our advisors speak to isn't going to change very much (though our bar might rise somewhat). For the last few years, we've already been accepting advisees on the basis of their interest in working on our top pressing problems (especially mitigating risks from AI, as described here), and refer promising applicants who are interested in an area we have less expertise in to other services / resources / connections.
"We still plan to talk to people considering work on any of our top problems (which includes animal welfare), and I believe we still have a lot of useful advice on how to pursue careers in these areas.
However, we will be applying a higher bar to applicants that aren't primarily interested in working on AI."
Job board
Along with slightly raising our bar for jobs not related to AI safety, we'll be moving to more automated curation of global health and development, climate change, and animal welfare roles, so that we can spend more of our human-curation time on AI and relevant areas. This means that we'll be relying more on external evaluators like GiveWell, meaning that our coverage might be worse in areas where good evaluators don't exist. Overall, we'll continue to list roles in these areas, but likely fewer than before.
Headhunting
Our headhunting service has historically been AI-focused due to capacity constraints, and will continue to be.
Video
Our video programme is new, and we're still in the process of establishing its strategy. In general, we do expect it to focus on topics relevant to making AGI go well.
Hey Cullen, thanks for responding! So I think there are object-level and meta-level thoughts here, and I was just using Jeremy as a stand-in for the polarisation of Open Source vs AI Safety more generally.
Object Level - I don't want to spend too long here as it's not the direct focus of Richard's OP. Some points:
On 'elite panic' and 'counter-enlightenment', he's not directly comparing FAIR to it I think. He's saying that previous attempts to avoid democratisation of power in the Enlightenment tradition have had these flaws. I do agree that it is escalatory though.
I think, from Jeremy's PoV, that centralization of power is the actual ballgame and what Frontier AI Regulation should be about. So one mention on page 31 probably isn't good enough for him. That's a fine reaction to me, just as it's fine for you and Marcus to disagree on the relative costs/benefits and write the FAIR paper the way you did.
On the actual points though, I actually went back and skim-listened to the webinar on the paper in July 2023, which Jeremy (and you!) participated in, and man I am so much more receptive and sympathetic to his position now than I was back then, and I don't really find Marcus and you to be that convincing in rebuttal, but as I say I only did a quick skim-listen so I hold that opinion very lightly.
Meta Level -
On the 'escalation' in the blog post, maybe his mind has hardened over the year? There's probably a difference between ~July23-Jeremy and ~Nov23Jeremy, who may view it as an escalation from the AI Safety side to double down on these kinds of legislative proposals. While it's before SB1047, I see Wiener had introduced an earlier intent bill in September 2023.
I agree that "people are mad at us, we're doing something wrong" isn't a guaranteed logic proof, but as you say it's a good prompt to think "should I have done something different?", and (not saying you're doing this) I think the absolute disaster zone that was the SB1047 debate and discourse can't be fully attributed to e/acc or a16z or something. I think the backlash I've seen to the AI Safety/x-risk/EA memeplex over the last few years should prompt anyone in these communities, especially those trying to influence the policy of the world's most powerful state, to really consider Cromwell's rule.
On "you will just in fact have pro-OS people mad at you, no matter how nicely your white papers are written": I think there's some sense in which that's true, but there's a lot of contingency about just how mad people get and whether other allies could have been made along the way. I think one of the reasons things got so bad is that previous work on AI Safety has underestimated the socio-political sides of Alignment and Regulation.[1]
I have a lot to say about this, much of which boils down to two points:
I don't think Jeremy is a good example of unnecessary polarization.
I think "avoid unnecessary polarization" is a bad heuristic for policy research (which, related to my first point, is what Jeremy was responding to in Dislightenment), at least if it means anything other than practicing the traditional academic virtues of acknowledging limitations, noting contrary opinion, being polite, being willing to update, inviting disagreement, etc.
The rest of your comment I agree with.
I realize that point (1) may seem like nitpicking, and that I am also emotionally invested in it for various reasons. But this is all in the spirit of something like avoiding reasoning from fictional evidence: if we want to have a good discussion of avoiding unnecessary polarization, we should reason from clear examples of it. If Jeremy is not a good example of it, we should not use him as a stand-in.
I was just using Jeremy as a stand-in for the polarisation of Open Source vs AI Safety more generally.
Right, this is in large part where our disagreement is: whether Jeremy is good evidence for or an example of unnecessary polarization. I just simply don't think that Jeremy is a good example of where there has been unnecessary (more on this below) polarization, because I think that he, explicitly and somewhat understandably, just finds the idea of approval regulation for frontier AI abhorrent. So to use Jeremy as evidence or example of unnecessary polarization, we have to ask what he was reacting to, and whether something unnecessary was done to polarize him against us.
Dislightenment "started out as a red team review" of FAIR, and FAIR is the most commonly referenced policy proposal in the piece, so I think that Jeremy's reaction in Dislightenment is best understood as, primarily, a reaction to FAIR. (More generally, I don't know what else he would have been reacting to, because in my mind FAIR was fairly catalytic in this whole debate, though it's possible I'm overestimating its importance. And in any case I wasn't on Twitter at the time, so may lack important context that he's importing into the conversation.) In which case, in order to support your general claim about unnecessary polarization, we would need to ask whether FAIR did unnecessary things to polarize him.
Which brings us to the question of what exactly unnecessary polarization means. My sense is that avoiding unnecessary polarization would, in practice, mean that policy researchers write and speak extremely defensively to avoid making any unnecessary enemies. This would entail falsifying not just their own personal beliefs about optimal policy, but also, crucially, falsifying their prediction about what optimal policy is from the set of preferences that the public already holds. It would lead to writing positive proposals shot through with diligent and pervasive reputation management, leading to a lot of unnecessary and confusing hedges and disjunctive asides. I think pieces like that can be good, but it would be very bad if every piece was like that.
Instead, I think it is reasonable and preferable for discourse to unfold like this: Policy researchers write politely about the things that they think are true, explain their reasoning, acknowledge limitations and uncertainties, and invite further discussion. People like Jeremy then enter the conversation, bringing a useful different perspective, which is exactly what happened here. And then we can update policy proposals over time, to give more or less weight to different considerations in light of new arguments, political evidence (what do people think is riskier: too much centralization or too much decentralization?) and technical evidence. And then maybe eventually there is enough consensus to overcome the vetocratic inertia of our political system and make new policy. Or maybe a consensus is reached that this is not necessary. Or maybe no consensus is ever reached, in which case the default is nothing happens.
Contrast this with what I think the "reduce unnecessary polarization" approach would tend to recommend, which is something closer to starting the conversation with an attempt at a compromise position. It is sometimes useful to do this. But I think that, in terms of actual truth discovery, laying out the full case for one's own perspective is productive and necessary. Without full-throated policy proposals, policy will tend too much either towards an unprincipled centrism (wherein all perspectives are seen as equally valid and therefore worthy of compromise) or towards the perspectives of those who defect from the "start at compromise" policy. When the stakes are really high, this seems bad.
To be clear, I don't think you're advocating for this "compromise-only" position. But in the case of Jeremy and Dislightenment specifically, I think this is what it would have taken to avoid polarization (and I doubt even that would have worked): writing FAIR with a much mushier, "who's to say?" perspective.
In retrospect, I think it's perfectly reasonable to think that we should have talked about centralization concerns more in FAIR. In fact, I endorse that proposition. And of course it was in some sense unnecessary to write it with the exact discussion of centralization that we did. But I nevertheless do not think that we can be said to have caused Jeremy to unnecessarily polarize against us, because I think him polarizing against us on the basis of FAIR is in fact not reasonable.
On 'elite panic' and 'counter-enlightenment', he's not directly comparing FAIR to it I think. He's saying that previous attempts to avoid democratisation of power in the Enlightenment tradition have had these flaws.
I disagree with this as a textual matter. Here are some excerpts from Dislightenment (emphases added):
Proposals for stringent AI model licensing and surveillance will . . . potentially roll[] back the societal gains of the Enlightenment.
bombing data centers and global surveillance of all computers is the only way[!!!] to ensure the kind of safety compliance that FAR proposes.
FAR briefly considers this idea, saying "for frontier AI development, sector-specific regulations can be valuable, but will likely leave a subset of the high severity and scale risks unaddressed". But it . . . promote[s] an approach which, as we've seen, could undo centuries of cultural, societal, and political development.
He fairly consistently paints FAIR (or licensing more generally, which is a core part of FAIR) as the main policy he is responding to.
I think, from Jeremy's PoV, that centralization of power is the actual ballgame and what Frontier AI Regulation should be about. So one mention on page 31 probably isn't good enough for him.
It is definitely fair for him to think that we should have talked about decentralization more! But I don't think it's reasonable for him to polarize against us on that basis. That seems like the crux of the issue.
Jeremy's reaction is most sympathetic if you model the FAIR authors specifically, or the TAI governance community more broadly, as a group of people totally unsympathetic to distribution-of-power concerns. The problem is that that is not true. My first main publication in this space was on the risk of excessively centralized power from AGI; another lead FAIR coauthor was on that paper too. Other coauthors have also written about this issue: e.g., 1; 2; 3 at 46-48; 4; 5; 6. It's a very central worry in the field, dating back to the first research agenda. So I really don't think polarization against us on the grounds that we have failed to give centralization concerns a fair shake is reasonable.
I think the actual explanation is that Jeremy and the group of which he is representative have a very strong prior in favor of open-sourcing things, and find it morally outrageous to propose restrictions thereon. While I think a prior in favor of OS is reasonable (and indeed correct), I do not think it reasonable for them to polarize against people who think there should be exceptions to the right to OS things. I think that it generally stems from an improper attachment to a specific method of distributing power without really thinking through the limits of that justification, or acknowledging that there even could be such limits.
You can see this dynamic at work very explicitly with Jeremy. In the seminar you mention, we tried to push Jeremy on whether, if a certain AI system turns out to be more like an atom bomb and less like voting, he would still think it's good to open-source it. His response was that AI is not like an atomic bomb.
Again, a perfectly fine proposition to hold on its own. But it completely fails to either: (a) consider what the right policy would be if he is wrong, (b) acknowledge that there is substantial uncertainty or disagreement about whether any given AI system will be more bomb-like or voting-like.
That's a fine reaction to me, just as it's fine for you and Marcus to disagree on the relative costs/benefits and write the FAIR paper the way you did.
I agree! But I guess I'm not sure where the room for Jeremy's unnecessary polarization comes in here. Do reasonable people get polarized against reasonable takes? No.
I know you're not necessarily saying that FAIR was an example of unnecessary polarizing discourse. But my claim is either (a) FAIR was in fact unnecessarily polarizing, or (b) Jeremy's reaction is not good evidence of unnecessary polarization, because it was a reaction to FAIR.
There's probably a difference between ~July '23 Jeremy and ~Nov '23 Jeremy
I think all of the opinions of his we're discussing are from July '23? Am I missing something?
On the actual points, though: I went back and skim-listened to the webinar on the paper in July 2023, which Jeremy (and you!) participated in, and man, I am so much more receptive and sympathetic to his position now than I was back then. I don't really find Marcus and you that convincing in rebuttal.
A perfectly reasonable opinion! But one thing that is not evident from the recording is that Jeremy showed up something like 10-20 minutes into the webinar, and so in fact missed a large portion of our presentation. Again, I think this is more consistent with some story other than unnecessary polarization. I don't think any reasonable panelist would think it appropriate to participate in a panel where they missed the presentation of the other panelists, though maybe he had some good excuse.
Hi EAs, I'm Dee, a first-time forum poster but long-time advocate for EA principles since first discovering the movement through Peter Singer's work. I've always had a particular interest in global health and wellbeing, which initially inspired me to complete a medical degree. While I enjoyed my studies, I became somewhat disheartened by the scope of impact I could have as a single doctor in a system largely geared towards treatment rather than prevention of disease. After a career pivot to management consulting for a couple of years, I eventually completed my PhD in epidemiology. I'm now using my research experience and medical knowledge to tackle complex public health problems.
As I've solidified my own goals to do good, both through my career and through giving to effective causes, I've sought to engage further with EA content and the community. I look forward to connecting and sharing ideas with you all!
Epidemiology! I hadn't really thought about epidemiology as a career but it strikes me as potentially very high impact, especially if you're going into it with an attention to impact. My basic thinking is that the field of health tends to have some of the lowest-hanging fruit in terms of improving people's lives, and epidemiology can have a leveraged impact by benefiting many people simultaneously (which is also why being a doctor is maybe less good: the number of people you can help is much smaller).
If you have thoughts, I am interested in where you think the big problems in epidemiology are, or at least which big problems you personally can contribute to. It's not a space I know much about. (You did say the problems are complex, which seems true to me, so I don't think I am really in a position to understand epidemiology lol.)
It's great that CEA will be prioritizing growing the EA community. IMO this has been a long time coming.
Here are some of the things I'll be looking for which would give me more confidence that this emphasis on growth will go well:
Prioritizing high-value community assets. Effectivealtruism.org is the de facto landing page for anyone who googles "effective altruism". Similarly, the EA newsletter is essentially the mailing list that newbies can join. Historically, I think both these assets have been dramatically underutilized. CEA has acknowledged under-prioritizing effectivealtruism.org ("for several years promoting the website, including through search engine optimization, was not a priority for us"), and the staff member responsible for the newsletter has also acknowledged that it hasn't been a priority ("the monthly EA Newsletter seems quite valuable, and I had many ideas for how to improve it that I wanted to investigate or test… [But due to competing priorities] I never prioritized doing a serious Newsletter-improvement project. (And by the time I was actually putting it together every month, I'd have very little time or brain space to experiment.)"). Both assets have the potential to be enormously valuable for many different parts of the EA community.
Creation of good, public growth dashboards. I sincerely hope that CEA will prioritize creating and sharing new and improved dashboards measuring community growth, the absence of which the community has been questioning for nearly a decade. CEA's existing dashboard provides some useful information, but it has not always been kept up to date (a recent update helped with this, but important information like traffic to effectivealtruism.org and Virtual Program attendance is still quite stale). And even if all the information were fresh, the dashboard in its current state does not really measure the key question ("how fast is the community growing?"), nor does it provide context on growth ("how fast is the community growing relative to how fast we want it to grow?"). Measuring growth is a standard activity for businesses, non-profits, and communities; EA has traditionally underinvested in such measurement, and I hope that will be changing under Zach's leadership. If growth is "at the core of [CEA's] mission", CEA is the logical home for producing a community-wide dashboard and enabling the entire community to benefit from it.
Thoughtful reflection on growth measurement. CEA's last public effort at measuring growth was an October 2023 memo for the Meta Coordination Forum. This project estimated that 2023 vs. 2022 growth was 30% for early funnel projects, 68% for mid funnel projects, and 8% for late funnel projects. With the benefit of an additional 18 months of metric data and anecdata, these numbers seem highly overoptimistic. Forum usage metrics have been on a steady decline since FTX's collapse in late 2022; EAG and EAGx attendance and connections all decreased in 2023 vs. 2022 and again in 2024 vs. 2023; the number of EA Funds donors continues to decline year over year, as has been the case since FTX's collapse; Virtual Program attendance is on a multi-year downward trend; etc. There are a lot of tricky methodological issues to sort out in the process of coming up with a meaningful dashboard, and I think the MCF memo generally took reasonable first stabs at addressing them; however, future efforts should be informed by the shortcomings we can now observe in the MCF memo's approach.
Transparency about growth strategy and targets. I think CEA should publicly communicate its growth strategy and targets to promote transparency and accountability. This post is a good start, though as Zach writes it is "not a detailed action plan." The devil will of course be in those details. To be clear, I think it's important that Zach (who is relatively new in his role) be given a long runway to implement his chosen growth strategy. The "accountability" I'd like to see isn't about e.g. community complaints if CEA fails to hit monthly or quarterly growth targets on certain metrics. It's about honest communication from CEA about their long-term growth plan and regular public check-ins about whether empirical data suggests the plan is going well or not. (FWIW, I think CEA has a lot of room for improvement in this area… For instance, I've probably read CEA's public communications much more thoroughly than almost anyone, and I was extremely surprised to see the claim in the OP that "Growth has long been at the core of our mission.")
To quickly add on to what Toby wrote: the CEA Online Team has also been redesigning effectivealtruism.org and we expect to soft launch it soon. I post quick takes when we update our half-quarterly plans, so you can follow along there. :)
Thank you for this article. I've read some of the stuff you wrote in your capacity at CEA, which I quite enjoyed; your comments on slow vs. quick mistakes changed my thinking. This is the first thing I've read since you started at Forethought. I have some comments, which are mostly critical. I tried using ChatGPT and Claude to make my comment more even-handed, but they did a bad job, so you're stuck with reading my overly critical writing. Some of my criticism may be misguided because I don't have a good understanding of the motivation behind writing the article, so it might help if you explained more about that motivation. Of course you're not obligated to explain anything to me or to respond at all; I'm just writing this because I think it's generally useful to share criticisms.
I think this article would benefit from a more thorough discussion of the downside risks of its proposed changes. Off the top of my head:
Increasing government dependency on AI systems could make policy-makers more reluctant to place restrictions on AI development because they would be hurting themselves by doing so. This is a very bad incentive.
The report specifically addresses how the fact that Microsoft Office is so embedded in government means the company can get away with bad practices, but seemingly doesn't connect this to how AI companies might end up in the same position.
Government contracts to buy LLM services increase AI company revenue, which shortens timelines.
The government does not always work in the interests of the people (in fact it frequently works against them!) so making the government more effective/powerful is not pure upside.
The article does mention some downsides, but with no discussion of tradeoffs, and it says we should focus on "win-wins" but doesn't actually say how we can avoid the downsides (or, if it did, I didn't get that out of the article).
To me the article reads like you decided the conclusion and then wrote a series of justifications. It is not clear to me how you arrived at the belief that the government needs to start using AI more, and it's not clear to me whether that's true.
For what it's worth, I don't think government competence is what's holding us back from having good AI regulations, it's government willingness. I don't see how integrating AI into government workflow will improve AI safety regulations (which is ultimately the point, right?[^1]), and my guess is on balance it would make AI regulations less likely to happen because policy-makers will become more attached to their AI systems and won't want to restrict them.
I also found it odd that the report did not talk about extinction risk. In its list of potential catastrophic outcomes, the final item on the list was "Human disempowerment by advanced AI", which IMO is an overly euphemistic way of saying "AI will kill everyone".
By my reading, this article is meant to be the sort of Very Serious Report That Serious People Take Seriously, which is why it avoids talking about x-risk. I think that:
you won't get people to care about extinction risks by pretending they don't exist;
the market is already saturated with AI safety people writing Very Serious Reports in which they pretend that human extinction isn't a serious concern;
AI x-risk is mainstream enough at this point that we can probably stop pretending not to care about it.
There are some recommendations in this article that I like, and I think it should focus much more on them:
investing in cyber security, pre-deployment testing of AI in high-stakes areas, and advancing research on mitigating the risks of advanced AI
Without better compliance tools, AI companies and AI systems might start taking increasingly consequential actions without regulatorsâ understanding or supervision
[Without oversight], the government may be unable to verify AI companiesâ claims about their testing practices or the safety of their AI models.
Steady AI adoption could backfire if it desensitizes government decision-makers to the risks of AI in government, or grows their appetite for automation past what the government can safely handle.
I also liked the section "Government adoption of AI will need to manage important risks" and I think it should have been emphasized more instead of buried in the middle.
Some line item responses
I don't really know how to organize this so I'm just going to write a list of lines that stood out to me.
invest in AI and technical talent
What does that mean exactly? I can't think of how you could do that without shortening timelines so I don't know what you have in mind here.
Streamline procurement processes for AI products and related tech
I also don't understand this. Procurement by whom, for what purpose? And again, how does this not shorten timelines? (Broadly speaking, more widespread use of AI shortens timelines at least a little bit by increasing demand.)
Gradual adoption is significantly safer than a rapid scale-up.
This sounds plausible but I am not convinced that it's true, and the article presents no evidence, only speculation. I would like to see more rigorous arguments for and against this position instead of taking it for granted.
And in a crisis (e.g. after a conspicuous failure, or a jump in the salience of AI adoption for the administration in power) agencies might cut corners and have less time for security measures, testing, in-house development, etc.
This line seems confused. Why would a conspicuous failure make government agencies want to suddenly start using the AI system that just conspicuously failed? Seems like this line is more talking about regulating AI than adopting AI, whereas the rest of the article is talking about adopting AI.
Frontier AI development will probably concentrate, leaving the government with less bargaining power.
I don't think that's how that works. Government gets to make laws. Frontier AI companies don't get to make laws. This is only true if you're talking about an AI company that controls an AI so powerful that it can overthrow the government, and if that's what you're talking about then I believe that would require thinking about things in a very different way than how this article presents them.
And: would adopting AI (i.e. paying frontier companies so government employees can use their products) reduce the concentration of power? Wouldn't it do the opposite?
It's natural to focus on the broad question of whether we should speed up or slow down government AI adoption. But this framing is both oversimplified and impractical
Up to this point, the article was primarily talking about how we should speed up government AI adoption. But now it's saying that's not a good framing? So why did the article use that framing? I get the sense that you didn't intend to use that framing, but it comes across as if you're using it.
Hire and retain technical talent, including by raising salaries
I would like to see more justification for why this is a good idea. The obvious upside is that people who better understand AI can write more useful regulations. On the other hand, empirically, it seems that people with more technical expertise (like ML engineers) are on average less in favor of regulations and more in favor of accelerating AI development (shortening timelines, although they usually don't think "timelines" are a thing). So arguably we should have fewer such people in positions of government power. I can see the argument either way, I'm not saying you're wrong, I'm just saying you can't take your position as a given.
And like I said before, I think by far the bigger bottleneck to useful AI regulations is willingness, not expertise.
Explore legal or other ways to avoid extreme concentration in the frontier AI market
(this isn't a disagreement, just a comment:)
You don't say anything about how to do that but it seems to me the obvious answer is antitrust law.
(this is a disagreement:)
The linked article attached to this quote says "It's very unclear whether centralizing would be good or bad", but you're citing it as if it definitively finds centralization to be bad.
If the US government never ramps up AI adoption, it may be unable to properly respond to existential challenges.
What does AI adoption have to do with the ability to respond to existential challenges? It seems to me that once AI is powerful enough to pose an existential threat, then it doesn't really matter whether the US government is using AI internally.
Map out scenarios in which AI safety regulation is ineffective and explore potential strategies
I don't think any mapping is necessary. Right now AI safety regulation is ineffective in every scenario, because there are no AI safety regulations (by safety I mean notkilleveryoneism). Trivially, regulations that don't exist are ineffective. Which is one reason why IMO the emphasis of this article is somewhat missing the mark: right now the priority should be to get any sort of safety regulations at all.
Build emergency AI capacity outside of the government
I am moderately bullish on this idea (I've spoken favorably about Sentinel before), although I don't actually have a good sense of when it would be useful. I'd like to see more exploration of exactly what sorts of scenarios "emergency capacity" could prevent catastrophes in. Not that that's within the scope of this article; I just wanted to mention it.
[^1]: Making government more effective in general doesn't seem to me to qualify as an EA cause area, although perhaps a case could be made. The thing that matters on EA grounds (with respect to AI) is making the government specifically more effective at, or more inclined to, regulate the development of powerful AI.
Is anyone working on an updated version of the biosecurity map? I helped make biosecurity.world and would be happy to help/ mentor someone interested in doing this. Please comment or DM me.
Thanks Vasco, I really appreciate the thoughtful engagement. I think there are a few different things getting a bit mixed together here, so I'd love to tease them apart and explain where I still see things differently.
You mentioned that the key is the difference in impact, not concern about animals. But I'd argue that this concern does in fact translate to impact, especially when we're thinking in terms of counterfactuals and replaceability. For example, if someone applies for a role at SWP, their counterfactual impact is likely just the difference between them and the next-best candidate, who is almost certainly also deeply concerned about shrimp welfare. But in an EC role, the counterfactual is likely that the position goes to someone who wouldn't raise animal issues at all. So the marginal impact is potentially much greater, even in junior positions.
We've already seen specific examples, particularly in the UK, where junior staff inside government have been able to push for progress on animal welfare that would never have happened through lobbying alone. These aren't abstract hypotheticals. Another specific example I learned of yesterday: someone was able to pass something through their local government that led to 400 million animals being spared, something that wasn't even on the radar before they entered. It seems extremely unlikely that this kind of leverage and counterfactual would exist for the best vs. next-best candidate in an NGO.
2. Hierarchy matters, but so does initiative, positioning, and timing.
Yes, the Commission is large and hierarchical. But so is almost every institution with leverage over major policy. What we've seen is that once someone is in, they can navigate toward departments and roles where they're better positioned to influence change. That's part of what this program is about: helping people enter the system with the long game in mind.
It's not a passive process: it requires individuals to actively find their leverage points and pockets of influence. A lot depends on the individual's initiative and ability to spot opportunities, but that's true in any sector, whether in NGOs or in policy. I would say, though, that if that doesn't appeal, it's a sign that working in the civil service is not a good fit.
You noted that lobbyists can reach many policymakers, which is true. But that doesn't mean they're more impactful than internal actors; it's highly dependent on context. And critically, lobbyists themselves will tell you (and did on our programme) that what they need most are credible insiders who understand the system, have networks, and can champion ideas from within.
3. External lobbying vs. insider influence is a false binary.
We often hear people argue for becoming a lobbyist instead of going into the system. But I think this skips a vital step: the most effective lobbyists often were insiders first. Without that institutional knowledge, they lack the credibility and relational capital that drive real traction on issues that aren't already politically salient, like shrimp welfare.
So to me, the idea that someone without any government experience should just jump into policy advocacy seems less plausible than a pathway that starts inside the system, builds knowledge, and later leverages that from a lobbying or NGO position if that's where personal fit leads.
So overall, I'd say the value of this programme comes not from comparing against some hypothetical "random" NGO role, but from offering people a realistic path into a system that's historically been quite closed off to animal advocates, and an opportunity to build essential career capital to become a more effective advocate in the future.
Thanks for the great clarifications, Lauren! Strongly upvoted.
Another specific example I learned of yesterday: someone was able to pass something through their local government that led to 400 million animals being spared, something that wasn't even on the radar before they entered. It seems extremely unlikely that this kind of leverage and counterfactual would exist for the best vs. next-best candidate in an NGO.
Interesting example! I would be interested to know more, but I understand it may be sensitive information to share publicly. I think one can help 400 M shrimp by donating 26.7 k$ (= 400*10^6/(15*10^3)) to the Shrimp Welfare Project (SWP). So, if your example was representative of the impact of a career in policy inside the system, and the impact per animal helped in your example matched that of SWP (which I estimated to be 0.0426 DALYs averted), maximising donations could still be better. For a career of 40 years, one would only need to donate 668 $ (= 26.7*10^3/40) more to SWP per year relative to the career in policy inside the system.
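The arithmetic in the comment above can be sketched as a few lines of Python. This is just a check of the stated figures, under the comment's own assumptions (15,000 shrimp helped per dollar donated to SWP, 400 million animals spared by the policy win); the small gap between 668 $ and the exact quotient comes from rounding 26.7 k$ first.

```python
# Assumptions taken from the comment above (not independently verified):
animals_spared = 400e6            # animals spared by the policy win
shrimp_per_dollar = 15e3          # shrimp helped per dollar donated to SWP

# Donation that would help the same number of animals via SWP.
equivalent_donation = animals_spared / shrimp_per_dollar  # ~26.7 k$

# Extra donation per year needed over a 40-year career to match it.
career_years = 40
extra_per_year = equivalent_donation / career_years

print(round(equivalent_donation), round(extra_per_year))
```
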
Thanks for giving feedback! I looked at this particular quick take again before April Fool's to make sure we'd fixed the issue. Thanks to @JP Addison for writing the code to make the tags visible.
many downward adjustments (and lack of upward adjustments)
My cost-effectiveness estimate is supposed to be unbiased in the sense of not being too low or high in expectation.
During Veganuary 2020, my wife and I made the decision to become vegan. We had been vegetarians before, and found out during Veganuary that a fully vegan lifestyle was easier than expected. Since then, one of our flatmates transitioned from omnivore to vegan. Another flatmate stayed omnivore but ate mostly vegan during the year that she lived with us. This is an extreme example, but it shows that the 31 emails can affect more than just one person, and for a duration longer than 6 months.
To be clear, I think one single email or video can turn someone from omnivore to vegan. However, I believe that is super far from the expected effect.
Overall, there seems to be a clear trend in Germany toward more vegan products.
The supply per capita of poultry meat in Germany has not had a clear downwards trend, although it does seem like it has already peaked.
Likewise for the supply per capita of fish and other seafood in Germany.
However, this is very weak evidence of the impact of Veganuary. There are many factors besides Veganuary which affect meat consumption in Germany, and Germany may well be the country Veganuary targets with the most positive trends. In the UK, the consumption per capita of poultry meat has been increasing, although that of fish and other seafood has recently been decreasing.
Oat milk shelves are larger than cow milk shelves in many retailers nowadays
Nitpick. Dairy accounts for a very small fraction of animal suffering. I think decreases in its consumption only matter to the extent they predict decreases in the consumption of eggs, poultry birds, fish, or other seafood.
I understand that you are worried about chicken and fish consumption. I have no knowledge of why these charts look the way they do, or why people in the UK consume twice as much chicken as those in Germany. It's also difficult to guess the impact of Veganuary on these trends. In that sense, I find the charts a bit distracting.
What I intended to say with my comment is that Veganuary has clearly visible impacts around me: when I go shopping, when I see ads, when I eat out. This seems to correlate with a general trend of seeing more vegan products, brands, and menu choices. Maybe the general trend that I identified is similarly distracting as your chicken and fish charts... yet it does seem to be something that Veganuary directly works on and influences.
I suspect that you brought up the chicken and fish charts because you worry about shifts in consumption from larger animals to higher numbers of small animals. This is a real possibility, but I would be wary of accusing Veganuary of causing such a shift without good evidence. I grant that Veganuary tries to appeal to a broad range of people with various reasons for reducing meat consumption, including climate reasons, which might cause a shift away from ruminants. But I recall there was a lot of Veganuary content around animal welfare. Personally, Veganuary shifted my views to care more about animals.
Animal welfare seems to be the main participant motivation. Here's a figure from the 2023 survey report:
Taking a step back, it's a little sad that this article feels so hostile towards Veganuary, and shows Veganuary in a bad light primarily because of discounts and back-of-the-envelope numbers that seem quite arbitrary. I see a lot less competition than you do between Veganuary and work on shrimp welfare or cage-free campaigns. On the contrary, people who have participated in Veganuary are likely more receptive for that type of work, and this is a benefit that we won't find in CEAs ;-)
It's great to try and analyze the cost-effectiveness of Veganuary. I'm thankful for this post and also for the responses by @Toni Vernelli and others.
While I appreciate the effort, I find it hard to agree with Vasco's conclusions. There are many discounts in the analysis that feel pretty arbitrary to me. Toni has answered this much better than I could. I'd just like to share a few personal impressions. These are of course biased, but might explain why I'm suspicious about the many downward adjustments (and lack of upward adjustments) in Vasco's analysis:
Veganuary is quite prominent where I live. There are numerous new products in supermarkets. I've seen many ads for vegan products in January, not directly by Veganuary but by franchises like Burger King.
During Veganuary 2020, my wife and I made the decision to become vegan. We had been vegetarians before, and found out during Veganuary that a fully vegan lifestyle was easier than expected. Since then, one of our flatmates transitioned from omnivore to vegan. Another flatmate stayed omnivore but ate mostly vegan during the year that she lived with us. This is an extreme example, but it shows that the 31 emails can affect more than just one person, and for a duration longer than 6 months.
Overall, there seems to be a clear trend in Germany toward more vegan products. Oat milk shelves are larger than cow milk shelves in many retailers nowadays; there are many meat alternatives; vegan products are becoming popular also in other areas such as chocolate and baked goods. It's difficult to isolate the effect that Veganuary has played in all this... but I'd be surprised if it was as small as Vasco estimates.
many downward adjustments (and lack of upward adjustments)
My cost-effectiveness estimate is supposed to be unbiased in the sense of not being too low or high in expectation.
During Veganuary 2020, my wife and I made the decision to become vegan. We had been vegetarians before, and found out during Veganuary that a fully vegan lifestyle was easier than expected. Since then, one of our flatmates transitioned from omnivore to vegan. Another flatmate stayed omnivore but ate mostly vegan during the year that she lived with us. This is an extreme example, but it shows that the 31 emails can affect more than just one person, and for a duration longer than 6 months.
To be clear, I think one single email or video can turn someone from omnivoure to vegan. However, I believe that is super far from the expected effect.
Overall, there seems to be a clear trend in Germany toward more vegan products.
The supply per capita of poultry meat in Germany has not had a clear downwards trend, although it does seem like it has already peaked.
Likewise for the supply per capita of fish and other seafood in Germany.
However, this is very weak evidence of the impact of Veganuary. There are many factors which affect meat consumption in Germany besides Veganuary, and that may well be the country which Veganuary targets with the most positive trends. In the UK, the consumption per capita of poultry meat has been increasing, although that on fish and other seafood has recently been decreasing.
Oat milk shelves are larger than cow milk shelves in many retailers nowadays
Nitpick. Dairy accounts for a very small fraction of animal suffering. I think decreases in its consumption only matter to the extent they predict decreases in the consumption of eggs, poultry birds, fish, or other seafood.
It's great to try and analyze the cost-effectiveness of Veganuary. I'm thankful for this post and also for the responses by @Toni Vernelli and others.
While I appreciate the effort, I find it hard to agree with Vasco's conclusions. There are many discounts in the analysis that feel pretty arbitrary to me. Toni has answered to this much better than I could. I'd just like to share a few personal impressions. These are of course biased, but might explain why I'm suspicious about the many downward adjustments (and lack of upward adjustments) in Vasco's analysis:
Veganuary is quite prominent where I live. There are numerous new products in supermarkets. I've seen many ads for vegan products in January, not directly by Veganuary but by franchises like Burger King.
During Veganuary 2020, my wife and I made the decision to become vegan. We had been vegetarians before, and found out during Veganuary that a fully vegan lifestyle was easier than expected. Since then, one of our flatmates transitioned from omnivore to vegan. Another flatmate stayed omnivore but ate mostly vegan during the year that she lived with us. This is an extreme example, but it shows that the 31 emails can affect more than just one person, and for a duration longer than 6 months.
Overall, there seems to be a clear trend in Germany toward more vegan products. Oat milk shelves are larger than cow milk shelves in many retailers nowadays; there are many meat alternatives; vegan products are becoming popular also in other areas such as chocolate and baked goods. It's difficult to isolate the effect that Veganuary has played in all this... but I'd be surprised if it was as small as Vasco estimates.
many downward adjustments (and lack of upward adjustments)
My cost-effectiveness estimate is supposed to be unbiased in the sense of not being too low or high in expectation.
During Veganuary 2020, my wife and I made the decision to become vegan. We had been vegetarians before, and found out during Veganuary that a fully vegan lifestyle was easier than expected. Since then, one of our flatmates transitioned from omnivore to vegan. Another flatmate stayed omnivore but ate mostly vegan during the year that she lived with us. This is an extreme example, but it shows that the 31 emails can affect more than just one person, and for a duration longer than 6 months.
To be clear, I think one single email or video can turn someone from omnivoure to vegan. However, I believe that is super far from the expected effect.
Overall, there seems to be a clear trend in Germany toward more vegan products.
The supply per capita of poultry meat in Germany has not had a clear downward trend, although it does seem like it has already peaked.
Likewise for the supply per capita of fish and other seafood in Germany.
However, this is very weak evidence of the impact of Veganuary. There are many factors which affect meat consumption in Germany besides Veganuary, and that may well be the country which Veganuary targets with the most positive trends. In the UK, the consumption per capita of poultry meat has been increasing, although that of fish and other seafood has recently been decreasing.
Oat milk shelves are larger than cow milk shelves in many retailers nowadays
Nitpick. Dairy accounts for a very small fraction of animal suffering. I think decreases in its consumption only matter to the extent they predict decreases in the consumption of eggs, poultry birds, fish, or other seafood.
No idea, it's probably worth reaching out to ask them and alert them in case they aren't already mindful of it! I personally am not the least bit interested in this concern, so I will not take any action to address it.
I am not saying this to be a dick (I hope), but because I don't want to give you a mistaken impression that we are currently making any effort to address this consideration at Screwworm Free Future.
I think people are far too happy to give an answer like: "Thanks for highlighting this concern, we are very mindful of this throughout our work" which while nice-sounding is ultimately dishonest and designed to avoid criticism. EA needs more honesty and you deserve to know my actual stance.
I don't mind at all someone looking into this and I am happy to change my mind if presented with evidence, but my prior for this changing my mind is so low that I don't currently consider it worthwhile to spend time investigating or even encouraging others to investigate.
Thanks for the comment, Mathias! I strongly upvoted it. I love the transparency. I emailed Mal Graham, WAI's strategy director, right after my comment.
Yeah, you should talk to someone who knows more about security than myself, but here are a couple of starting points:
math-proven safe AIs
This is not a thing, and likely cannot be a thing. You can't prove an AI system isn't malign, and work that sounds like it says this is actually doing something very different.
You can do everything you do now, even buy or rent GPUs, all of them just will be cloud math-proven safe GPUs
You can't know that a given matrix multiplication won't be for an AI system. It's the same operation, so if you can buy or rent GPU time, how would it know what you are doing?
Thank you for your interest, David! Math-proven safe AIs are possible; our group has just achieved it (our researcher writes under a pseudonym for safety reasons, please ignore it): https://x.com/MelonUsks/status/1907929710027567542
Why is it math-proven safe? Because it's fully static: an LLM by itself is a giant static geometric shape in a file, and only GPUs make it non-static, agentic. It's called place AI; it's a type of tool AI.
To address your second question, there is a way to know whether a given matrix multiplication is for AI or not. In the cloud we'll have a math-proven safe AI model inside of each math-proven safe GPU: GPU hardware will be remade to be an isolated unit that just spits out output: images, text, etc. Each GPU is an isolated math-proven safe computer, the sole purpose of which is safety and hardware+firmware isolation of the AI model from the outside world.
But the main priority is putting all the GPUs in international scientist-controlled clouds; they'll figure out the small details that are left to resolve. Almost all current GPUs (especially consumer ones) are 100% unprotected from the imminent AI agent botnet (think a computer virus, but much worse), and we can't switch off the whole Internet.
Please, refer to the link above for further information. Thank you for this conversation!
Just to respond to a narrow point because I think this is worth correcting as it arises: Most of the US/EU GDP growth gap you highlight is just population growth. In 2000 to 2022 the US population grew ~20%, vs. ~5% for the EU. That almost exactly explains the 55% vs. 35% growth gap in that time period on your graph; 1.55 / 1.2 * 1.05 = 1.36.
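The arithmetic in that comparison can be sanity-checked in a couple of lines (a rough sketch using the approximate figures quoted in the comment, not official statistics):

```python
# Rough check of the claim that population growth explains most of the
# US/EU GDP growth gap over 2000-2022 (figures are the comment's approximations).
us_gdp_growth = 1.55   # US GDP grew ~55%
us_pop_growth = 1.20   # US population grew ~20%
eu_pop_growth = 1.05   # EU population grew ~5%

# US growth per capita over the period.
us_per_capita_growth = us_gdp_growth / us_pop_growth   # ~1.29

# If the EU had matched US per-capita growth, its total GDP growth factor
# would be roughly this, which lands near the observed ~1.35.
implied_eu_growth = us_per_capita_growth * eu_pop_growth

print(f"Implied EU growth factor: {implied_eu_growth:.2f}")  # prints 1.36
```

In other words, equalising for population growth, the implied EU growth factor of about 1.36 nearly matches the observed ~35% growth, which is the comment's point.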
This shouldn't be surprising, because productivity in the 'big 3' of the US, France, and Germany has tracked very closely for quite some time. (Edit: I wasn't expecting this comment to blow up, and it seems I may have rushed this point. See Erich's comment below and my response.) The source below shows a slight increase in the gap, but of <5% over 20 years. If you look further down my post, the Economist reaches the opposing conclusion, but again by very thin margins. Mostly I think the right conclusion is that the productivity gap has barely changed relative to demographic factors.
I'm not really sure where the meme that there's some big / growing productivity difference due to regulation comes from, but I've never seen supporting data. To the extent culture or regulation is affecting that growth gap, it's almost entirely going to be from things that affect total working hours, e.g. restrictions on migration, paid leave, and lower birth rates[1], not from things like how easy it is to found a startup.
But in aggregate, western Europeans get just as much out of their labour as Americans do. Narrowing the gap in total GDP would require additional working hours, either via immigration or by raising the amount of time citizens spend on the job.
Fertility rates are actually pretty similar now, but the US had much higher fertility than Germany especially around 1980 - 2010, converging more recently, so it'll take a while for that to impact the relative sizes of the working populations.
The obvious difference is that an alternative candidate for a junior position in a shrimp welfare organization is likely to be equally concerned about shrimp welfare.
I understand this. However, the key is the difference in impact, not in concern about animals. I agree people completing the program care much more about animals than a random person in a junior position in the EU's institutions, but my impression is that there is limited room for the greater care to translate into helping animals in junior positions. The Commission has 32k people, whereas the largest organisation recommended by ACE, The Humane League (THL), has 136, so hierarchy matters much more in the former.
And a junior person progressing in their career may end up with direct policy responsibility for their areas of interest, whereas a person who remains a lobbyist will never have this. It seems non-obvious that even a senior lobbyist will have more impact on policymakers than their more junior adviser or research assistant, though as you say it does depend on whether the junior adviser has the freedom to highlight issues of concern.
Makes sense. On the other hand, a lobbyist can interact with more policymakers than an APA. I do not know whether a lobbyist is more or less impactful than an APA. I think it depends on the specifics.
Thanks Vasco, I really appreciate the thoughtful engagement. I think there are a few different things getting a bit mixed together here, so I'd love to tease them apart and explain where I still see things differently.
You mentioned that the key is the difference in impact, not concern about animals. But I'd argue that this concern does in fact translate to impact, especially when we're thinking in terms of counterfactuals and replaceability. For example, if someone applies for a role at SWP, their counterfactual impact is likely just the difference between them and the next-best candidate, who is almost certainly also deeply concerned about shrimp welfare. But in an EC role, the counterfactual is likely that the position goes to someone who wouldn't raise animal issues at all. So the marginal impact is potentially much greater, even in junior positions.
We've already seen specific examples, particularly in the UK, where junior staff inside government have been able to push for progress on animal welfare that would never have happened through lobbying alone. These aren't abstract hypotheticals. Another specific example I learned of yesterday: someone was able to pass something through their local government that led to 400 million animals being spared, something that wasn't even on the radar before they entered. It seems extremely unlikely that this kind of leverage and counterfactual would hold for the best vs. next-best candidate in an NGO.
2. Hierarchy matters, but so does initiative, positioning, and timing.
Yes, the Commission is large and hierarchical. But so is almost every institution with leverage over major policy. What we've seen is that once someone is in, they can navigate toward departments and roles where they're better positioned to influence change. That's part of what this program is about: helping people enter the system with the long game in mind.
It's not a passive process: it requires individuals to actively find their leverage points and pockets of influence. A lot depends on the individual's initiative and ability to spot opportunities, but that's true in any sector, whether in NGOs or in policy. I would say, though, that if that doesn't appeal, it's a sign that working in the civil service is not a good fit.
You noted that lobbyists can reach many policymakers, which is true. But that doesn't mean they're more impactful than internal actors: it's highly dependent on context. And critically, lobbyists themselves will tell you (and did on our programme) that what they need most are credible insiders who understand the system, have networks, and can champion ideas from within.
3. External lobbying vs. insider influence is a false binary.
We often hear people argue for becoming a lobbyist instead of going into the system. But I think this skips a vital step: the most effective lobbyists often were insiders first. Without that institutional knowledge, they lack the credibility and relational capital that drives real traction on issues that aren't already politically salient, like shrimp welfare.
So to me, the idea that someone without any government experience should just jump into policy advocacy seems less plausible than a pathway that starts inside the system, builds knowledge, and later leverages that from a lobbying or NGO position if that's where personal fit leads.
So overall, I'd say the value of this programme comes not from comparing against some hypothetical "random" NGO role, but from offering people a realistic path into a system that's historically been quite closed off to animal advocates, and an opportunity to build essential career capital to be a more effective advocate in the future.
Wild Animal Initiative [WAI] is planning on funding research investigating the welfare effects of screwworm eradication
Great to know! Do you know whether they will cover effects on screwworms, which I worry may make their eradication harmful? I think it is fine to pursue interventions which may be harmful to wild animals nearterm, but then it is important to learn from them to minimise harmful effects in the future.
This is super encouraging - I'm impressed how you leaned into the areas where liberal arts students might already have a felt need and interest, both empathetic and smart.
1) Finding a meaningful job (apparently a big deal for Gen Z)
2) Diverse food options including vegetarian meals
No suggestions here unfortunately, at 38 I'm not sure what the youth are into ;).
Reflections on "Status Handcuffs" over one's career
(This was edited using Claude)
Having too much professional success early on can ironically restrict you later on. People typically are hesitant to go down in status when choosing their next job. This can easily mean that "staying in career limbo" can be higher-status than actually working. At least when you're in career limbo, you have a potential excuse.
This makes it difficult to change careers. It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to learn a new domain, for instance.
The EA Community Context
In the EA community, some aspects of this are tricky. The funders very much want to attract new and exciting talent. But this means that the older talent is in an awkward position.
The most successful get to take advantage of the influx of talent, with more senior leadership positions. But there aren't too many of these positions to go around. It can feel weird to work on the same level or under someone more junior than yourself.
Pragmatically, I think many of the old folks around EA are either doing very well, or are kind of lost/exploring other avenues. Other areas allow people to have more reputable positions, but these are typically not very EA/effective areas. Often E2G isn't very high-status in these clusters, so I think a lot of these people just stop doing much effective work.
Similar Patterns in Other Fields
This reminds me of law firms, which are known to have "up or out" cultures. I imagine some of this acts as a formal way to prevent this status challenge - people who don't highly succeed get fully kicked out, in part because they might get bitter if their career gets curtailed. An increasingly narrow set of lawyers continue on the Partner track.
I'm also used to hearing about power struggles for senior managers close to retirement at big companies, where there's a similar struggle. There's a large cluster of highly experienced people who have stopped being strong enough to stay at the highest levels of management. Typically these people stay too long, then completely leave. There can be few paths to gracefully go down a level or two while saving face and continuing to provide some amount of valuable work.
But around EA and a lot of tech, I think this pattern can happen much sooner - like when people are in the age range of 22 to 35. It's more subtle, but it still happens.
Finding Solutions
I'm very curious if it's feasible for some people to find solutions to this. One extreme would be, "Person X was incredibly successful 10 years ago. But that success has faded, and now the only useful thing they could do is office cleaning work. So now they do office cleaning work. And we've all found a way to make peace with this."
Traditionally, in Western culture, such an outcome would be seen as highly shameful. But in theory, being able to find peace and satisfaction from something often seen as shameful for (what I think of as overall-unfortunate) reasons could be considered a highly respectable thing to do.
Perhaps there could be a world where [valuable but low-status] activities are identified, discussed, and later made high-status.
The EA Ideal vs. Reality
Back to EA. In theory, EAs are people who try to maximize their expected impact. In practice, EA is a specific ideology that typically has a limited impact on people (at least compared to strong religious groups, for instance). I think that the EA scene has demonstrated success at getting people to adjust careers (in circumstances where it's fairly cheap and/or favorable to do so), and has created an ecosystem that rewards people for certain EA behaviors. But at the same time, people typically come with a great deal of non-EA constraints that must be continually satisfied for them to be productive: money, family, stability, health, status, etc.
Personal Reflection
Personally, every few months I really wonder what might make sense for me. I'd love to be the kind of person who would be psychologically okay doing the lowest-status work for the youngest or lowest-status people. At the same time, knowing myself, I'm nervous that taking a very low-status position might cause some of my mind to feel resentment and burnout. I'll continue to reflect on this.
I agree with you. I think in EA this is especially the case because much of the community-building work is focused on universities/students, and because of the titling issue someone else mentioned. I don't think someone fresh out of uni should be head of anything, wah. But the EA movement is young and was started by young people, so it'll take a while for career-long progression funnels to develop organically.
It's great to try and analyze the cost-effectiveness of Veganuary. I'm thankful for this post and also for the responses by @Toni Vernelli and others.
Re Anthropic and (unpopular) parallels to FTX: just thinking that it's pretty remarkable that no one has brought up the fact that SBF, Caroline Ellison and FTX were major funders of Anthropic. Arguably Anthropic wouldn't be where they are today without their help! It's unfortunate the journalist didn't press them on this.
It's great that CEA will be prioritizing growing the EA community. IMO this is a long time coming.
Here are some of the things I'll be looking for which would give me more confidence that this emphasis on growth will go well:
Prioritizing high-value community assets. Effectivealtruism.org is the de facto landing page for anyone who googles "effective altruism". Similarly, the EA newsletter is essentially the mailing list that newbies can join. Historically, I think both these assets have been dramatically underutilized. CEA has acknowledged under-prioritizing effectivealtruism.org ("for several years promoting the website, including through search engine optimization, was not a priority for us") and the staff member responsible for the newsletter has also acknowledged that this hasn't been a priority ("the monthly EA Newsletter seems quite valuable, and I had many ideas for how to improve it that I wanted to investigate or test… [But due to competing priorities] I never prioritized doing a serious Newsletter-improvement project. (And by the time I was actually putting it together every month, I'd have very little time or brain space to experiment.)") Both assets have the potential to be enormously valuable for many different parts of the EA community.
Creation of good, public growth dashboards. I sincerely hope that CEA will prioritize creating and sharing new and improved dashboards measuring community growth, the absence of which the community has been questioning for nearly a decade. CEA's existing dashboard provides some useful information, but it has not always been kept up to date (a recent update helped with this, but important information like traffic to effectivealtruism.org and Virtual Program attendance are still quite stale). And even if all the information were fresh, the dashboard in its current state does not really measure the key question ("how fast is the community growing?") nor does it provide context on growth ("how fast is the community growing relative to how fast we want it to grow?") Measuring growth is a standard activity for businesses, non-profits, and communities; EA has traditionally underinvested in such measurement and I hope that will be changing under Zach's leadership. If growth is "at the core of [CEA's] mission", CEA is the logical home for producing a community-wide dashboard and enabling the entire community to benefit from it.
Thoughtful reflection on growth measurement. CEA's last public effort at measuring growth was an October 2023 memo for the Meta Coordination Forum. This project estimated that 2023 vs. 2022 growth was 30% for early funnel projects, 68% for mid funnel projects, and 8% for late funnel projects. With the benefit of an additional 18 months of metric data and anecdata, these numbers seem highly overoptimistic. Forum usage metrics have been on a steady decline since FTX's collapse in late 2022, EAG and EAGx attendance and connections have all decreased in 2023 vs. 2022 and 2024 vs. 2023, the number of EA Funds donors continues to decline on a year over year basis as has been the case since FTX's collapse, Virtual Program attendance is on a multi-year downward trend, etc. There are a lot of tricky methodological issues to sort out in the process of coming up with a meaningful dashboard and I think the MCF memo generally took reasonable first stabs at addressing them; however, future efforts should be informed by shortcomings that we can now observe in the MCF memo approach.
Transparency about growth strategy and targets. I think CEA should publicly communicate its growth strategy and targets to promote transparency and accountability. This post is a good start, though as Zach writes it is "not a detailed action plan." The devil will of course be in those details. To be clear, I think it's important that Zach (who is relatively new in his role) be given a long runway to implement his chosen growth strategy. The "accountability" I'd like to see isn't about e.g. community complaints if CEA fails to hit monthly or quarterly growth targets on certain metrics. It's about honest communication from CEA about their long-term growth plan and regular public check-ins about whether empirical data suggests the plan is going well or not. (FWIW, I think CEA has a lot of room for improvement in this area… For instance, I've probably read CEA's public communications much more thoroughly than almost anyone, and I was extremely surprised to see the claim in the OP that "Growth has long been at the core of our mission.")
Hey! I'm the current staff-member working on the EA Newsletter - and I'm currently working on the EA Newsletter improvement project we didn't have time for before. So far this has been:
Making the EA Newsletter sign-up box more prominent in a few places (EA.org and CEA.org) + adding a link to the EA reddit side-panel (surprisingly big community).
Making the sign-up flow single-opt-in.
Designing better metrics to track impact and growth.
Re-writing the intro email campaign people get when they sign up and A/B testing it - this started recently so no new findings yet, but we should have info to improve it at the end of the month.
The next step is more seriously thinking about marketing, considering advertising it, integrating it more with other CEA touchpoints etc... Stay tuned.
Also, I always welcome any suggestions for low-hanging fruit in Newsletter marketing (I'm sure there is a lot of this), as well as general feedback on the Newsletter itself.
"…there is general agreement that current and foreseeable AI systems do not have what it takes to be responsible for their actions (moral agents), or to be systems that humans should have responsibility towards (moral patients)."
Seems false, unless he's using "general agreement" and "foreseeable" in some very narrow sense?
I feel like this should be caveated with a "long timelines have gotten short... within people the author knows about in tech circles".
I mean, just two months ago someone asked a room full of cutting edge computational physicists whether their job could be replaced by an AI soon, and the response was audible laughter and a reply of "not in our lifetimes".
On one side you could say that this discrepancy is because the computational physicists aren't as familiar with state of the art genAI, but on the flipside, you could point out that tech circles aren't familiar with state of the art physics, and are seriously underestimating the scale of task ahead of them.
Reflections on "Status Handcuffs" over one's career
(This was edited using Claude)
Having too much professional success early on can ironically restrict you later on. People typically are hesitant to go down in status when choosing their next job. This can easily mean that "staying in career limbo" can be higher-status than actually working. At least when you're in career limbo, you have a potential excuse.
This makes it difficult to change careers. It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to learn a new domain, for instance.
The EA Community Context
In the EA community, some aspects of this are tricky. The funders very much want to attract new and exciting talent. But this means that the older talent is in an awkward position.
The most successful get to take advantage of the influx of talent, with more senior leadership positions. But there aren't too many of these positions to go around. It can feel weird to work on the same level or under someone more junior than yourself.
Pragmatically, I think many of the old folks around EA are either doing very well, or are kind of lost/exploring other avenues. Other areas allow people to have more reputable positions, but these are typically not very EA/effective areas. Often E2G isn't very high-status in these clusters, so I think a lot of these people just stop doing much effective work.
Similar Patterns in Other Fields
This reminds me of law firms, which are known to have "up or out" cultures. I imagine some of this acts as a formal way to prevent this status challenge - people who don't highly succeed get fully kicked out, in part because they might get bitter if their career gets curtailed. An increasingly narrow set of lawyers continue on the Partner track.
I'm also used to hearing about power struggles for senior managers close to retirement at big companies, where there's a similar struggle. There's a large cluster of highly experienced people who have stopped being strong enough to stay at the highest levels of management. Typically these people stay too long, then completely leave. There can be few paths to gracefully go down a level or two while saving face and continuing to provide some amount of valuable work.
But around EA and a lot of tech, I think this pattern can happen much sooner - like when people are in the age range of 22 to 35. It's more subtle, but it still happens.
Finding Solutions
I'm very curious if it's feasible for some people to find solutions to this. One extreme would be, "Person X was incredibly successful 10 years ago. But that success has faded, and now the only useful thing they could do is office cleaning work. So now they do office cleaning work. And we've all found a way to make peace with this."
Traditionally, in Western culture, such an outcome would be seen as highly shameful. But in theory, being able to find peace and satisfaction from something often seen as shameful for (what I think of as overall-unfortunate) reasons could be considered a highly respectable thing to do.
Perhaps there could be a world where [valuable but low-status] activities are identified, discussed, and later made high-status.
The EA Ideal vs. Reality
Back to EA. In theory, EAs are people who try to maximize their expected impact. In practice, EA is a specific ideology that typically has a limited impact on people (at least compared to strong religious groups, for instance). I think that the EA scene has demonstrated success at getting people to adjust careers (in circumstances where it's fairly cheap and/or favorable to do so), and has created an ecosystem that rewards people for certain EA behaviors. But at the same time, people typically come with a great deal of non-EA constraints that must be continually satisfied for them to be productive: money, family, stability, health, status, etc.
Personal Reflection
Personally, every few months I really wonder what might make sense for me. I'd love to be the kind of person who would be psychologically okay doing the lowest-status work for the youngest or lowest-status people. At the same time, knowing myself, I'm nervous that taking a very low-status position might cause some of my mind to feel resentment and burnout. I'll continue to reflect on this.
Thanks for writing this, this is also something I have been thinking about and you've expressed it more eloquently.
One thing I have thought might be useful is at times showing restraint with job titling. I've observed cases where people have held a title such as Director in a small or growing org, where in a larger org the same role might be titled coordinator, lead, or admin.
I've thought at times this doesn't necessarily set people up for long-term career success, as the logical next career step in terms of skills and growth, or a career shift, is often associated with a lower-sounding title. Which I think decreases motivation to take on these roles.
At the same time I have seen people, including myself, take a decrease in salary and title, in order to shift careers and move forward.
I guess this also applies to junior positions within the system, whose freedom would be determined to a significant extent by people in senior positions
The obvious difference is that an alternative candidate for a junior position in a shrimp welfare organization is likely to be equally concerned about shrimp welfare. An alternative candidate for a junior person in an MEP's office or DG Mare is not, hence the difference at the margin is (if non-zero) likely much greater. And a junior person progressing in their career may end up with direct policy responsibility for their areas of interest, whereas a person who remains a lobbyist will never have this. It even seems non-obvious that even a senior lobbyist will have more impact on policymakers than their more junior adviser or research assistant, though as you say it does depend on whether the junior adviser has the freedom to highlight issues of concern.
The obvious difference is that an alternative candidate for a junior position in a shrimp welfare organization is likely to be equally concerned about shrimp welfare.
I understand this. However, the key is the difference in impact, not in concern about animals. I agree people completing the program care much more about animals than a random person in a junior position in the EU's institutions, but my impression is that there is limited room for the greater care to translate into helping animals in junior positions. The Commission has 32,000 people, whereas the largest organisation recommended by ACE, The Humane League (THL), has 136, so hierarchy matters much more in the former.
And a junior person progressing in their career may end up with direct policy responsibility for their areas of interest, whereas a person who remains a lobbyist will never have this. It even seems non-obvious that even a senior lobbyist will have more impact on policymakers than their more junior adviser or research assistant, though as you say it does depend on whether the junior adviser has the freedom to highlight issues of concern.
Makes sense. On the other hand, a lobbyist can interact with more policymakers than an APA. I do not know whether a lobbyist is more or less impactful than an APA. I think it depends on the specifics.
That's great to hear! BlueDot has been my main resource for getting to grips with AI. Please feel free to share any ideas that come up as you explore how this applies to your own advocacy :-)
This is a critically important and well-articulated post, thank you for defining and championing the Moral Alignment (MA) space. I strongly agree with the core arguments regarding its neglect compared to technical safety, the troubling paradox of purely human-centric alignment given our history, and the urgent need for a sentient-centric approach.
You rightly highlight Sam Altman's question: "to whose values do you align the system?" This underscores that solving MA isn't just a task for AI labs or experts, but requires much broader societal reflection and deliberation. If we aim to align AI with our best values, not just a reflection of our flawed past actions, we first need robust mechanisms to clarify and articulate those values collectively.
Building on your call for action, perhaps a vital complementary approach could be fostering this deliberation through a widespread network of accessible "Ethical-Moral Clubs" (or perhaps "Sentientist Ethics Hubs" to align even closer with your theme?) across diverse communities globally.
These clubs could serve a crucial dual purpose:
Formulating Alignment Goals: They would provide spaces for communities themselves to grapple with complex ethical questions and begin articulating what kind of moral alignment they actually desire for AI affecting their lives. This offers a bottom-up way to gather diverse perspectives on the "whose values?" question, potentially identifying both local priorities and shared, potentially universal principles across regions.
Broader Ethical Education & Reflection: These hubs would function as vital centers for learning. They could help participants, and by extension society, better understand different ethical frameworks (including the sentientism central to your post), critically examine their own "stated vs. realized" values (as you mentioned), and become more informed contributors to the crucial dialogue about our future with AI.
Such a grassroots network wouldn't replace the top-down efforts and research you advocate for, but could significantly support and strengthen the MA movement you envision. It could cultivate the informed public understanding, deliberation, and engagement necessary for sentient-centric AI to gain legitimacy and be implemented effectively and safely.
Ultimately, fostering collective ethical literacy and structured deliberation seems like a necessary foundation for ensuring AI aligns with the best of our values, benefiting all sentient beings. Thanks again for pushing this vital conversation forward.
I was just thinking about writing a post like this after listening to https://www.astralcodexten.com/p/introducing-ai-2027 and especially the end where they're talking about getting into blogging, and thinking about the massive blind spot Rationalists seem to have for sentientism. I'm particularly interested in ways to get involved and help push this cause forward. Especially as someone who frankly, feels pretty helpless with the mass scale of non-human suffering and mass amount of human apathy towards it, as well as the many flaws in the current animal rights movement.
I think creating content on these topics is very valuable, and I am happy to brainstorm other options. I will also do a post on possible interventions.
"…there is general agreement that current and foreseeable AI systems do not have what it takes to be responsible for their actions (moral agents), or to be systems that humans should have responsibility towards (moral patients)."
Seems false, unless he's using "general agreement" and "foreseeable" in some very narrow sense?
Yes, the only realistic and planet-wide 100% safe solution is this: putting all the GPUs in safe cloud/s controlled by international scientists that only make math-proven safe AIs and only stream output to users.
Each user can use his GPU for free from the cloud on any device (even on phone), when the user doesn't use it, he can choose to earn money by letting others use his GPU.
You can do everything you do now, even buy or rent GPUs; all of them will just be cloud math-proven safe GPUs instead of physical ones. Because GPUs are like nukes, and we want no nukes, or to put them deep underground in one place so they can be controlled by international scientists.
Computer viruses we still didn't 100% solve (my mom had an Android virus recently); even the iPhone and Nintendo Switch got jailbroken almost instantly, and there are companies that jailbreak iPhones as a service. I think Google Docs never got jailbroken or majorly hacked; it's a cloud service, so we need to base our AI and GPU security on this best example. We need to have all our GPUs in an internationally scientist-controlled cloud.
Else we'll have any hacker write a virus (just to steal money) with an AI agent component, grabbing consumer GPUs like cupcakes. The AI agent can even become autonomous (and we know they become evil in major ways if given an evil goal - wanting to have a tea party with Stalin and Hitler, per a recent paper). Will anyone align AIs for hackers, or will hackers themselves do it perfectly (they won't), making an AI agent that just steals money but stays a slave and does nothing else bad?
Yeah, you should talk to someone who knows more about security than myself, but as a couple of starting points:
math-proven safe AIs
This is not a thing, and likely cannot be a thing. You can't prove an AI system isn't malign, and work that sounds like it says this is actually doing something very different.
You can do everything you do now, even buy or rent GPUs, all of them just will be cloud math-proven safe GPUs
You can't know that a given matrix multiplication won't be for an AI system. It's the same operation, so if you can buy or rent GPU time, how would it know what you are doing?
Easter (April 20th this year) is another unique opportunity: likely less defensiveness addressing annual egg decorating/tossing/hiding/etc. than confronting daily diets.
Thank you for this post. I think it does a great job of outlining the double-edged sword we're facing - the potential for AI to either end enormous suffering or amplify it exponentially.
Your suggestion to reframe our movement's goal really expanded my thinking: "ensure that advanced AI and the people who control it are aligned with animals' interests by 2030." This feels urgent and necessary given the timelines you've outlined.
I'm particularly concerned that our society's current commodified view of animals could be baked into AGI systems and scaled to unprecedented levels.
The strategic targets you've identified make perfect sense - especially the focus on AI/animal collaborations and getting animal advocates into rooms where AGI decisions are being made. We should absolutely be leveraging AI-powered advocacy tools while we can still shape their development.
Thank you for this clarity. I'll be thinking much more deeply about how my own advocacy work needs to adapt to this possible near-future scenario.
Woah, huge congratulations on getting 80 pledges! That's a really incredible achievement - I hope you all feel proud :)
I would guess that established uni groups at big schools don't get 80 pledges per year; you might consider reaching out to GWWC (community@givingwhatwecan.org) to brainstorm how to make the most of this amazing momentum.
I don't have experience in student group organizing (not starting an EA group at my college is my biggest regret in life), but I'd recommend looking into whether your campus career center is open to co-hosting events and working with students on applying to high-impact roles.
At the liberal arts school I went to, events hosted by the career center tended to be pretty well-attended. Plus, you can lean on the job boards from 80k, Probably Good, and Animal Advocacy Careers to direct students to real world opportunities.
Another idea is to look into whether you can teach a student forum about EA for college credit! It really lowers the bar for students to commit to weekly meetings/readings if they can substitute it for another class.
And if students in your club are ever interested in talking to someone about entry-level operations or grantmaking work, I'm always excited to call!
Only the most elite 0.1 percent of people can even have a meaningful "public private disconnect" as you have to have quite a prominent public profile for that to even be an issue.
Hmm yeah, that's kinda my point? Like complaining about your annoying coworker anonymously online is fine, but making a public blog post like "my coworker Jane Doe sucks for these reasons" would be weird, people get fired for stuff like that. And referencing their wedding website would be even more extreme.
(Of course, most people's coworkers aren't trying to reshape the lightcone without public consent so idk, maybe different standards should apply here. I can tell you that a non-trivial number of people I've wanted to hire for leadership positions in EA have declined for reasons like "I don't want people critiquing my personal life on the EA Forum" though.)
No one is critiquing Daniela's personal life though; they're critiquing something about her public life (i.e. her voluntary public statements to journalists) for contradicting what she's said in her personal life. Compare this with a common reason people get cancelled, where the critique is that there's something bad in their personal life and people are disappointed that the personal life doesn't reflect the public persona; in this case it's the other way around.
It's great that CEA will be prioritizing growing the EA community. IMO this is a long time coming.
Here are some of the things I'll be looking for which would give me more confidence that this emphasis on growth will go well:
Prioritizing high-value community assets. Effectivealtruism.org is the de facto landing page for anyone who googles "effective altruism". Similarly, the EA newsletter is essentially the mailing list that newbies can join. Historically, I think both these assets have been dramatically underutilized. CEA has acknowledged under-prioritizing effectivealtruism.org ("for several years promoting the website, including through search engine optimization, was not a priority for us") and the staff member responsible for the newsletter has also acknowledged that this hasn't been a priority ("the monthly EA Newsletter seems quite valuable, and I had many ideas for how to improve it that I wanted to investigate or test… [But due to competing priorities] I never prioritized doing a serious Newsletter-improvement project. (And by the time I was actually putting it together every month, I'd have very little time or brain space to experiment.)") Both assets have the potential to be enormously valuable for many different parts of the EA community.
Creation of good, public growth dashboards. I sincerely hope that CEA will prioritize creating and sharing new and improved dashboards measuring community growth, the absence of which the community has been questioning for nearly a decade. CEA's existing dashboard provides some useful information, but it has not always been kept up to date (a recent update helped with this, but important information like traffic to effectivealtruism.org and Virtual Program attendance is still quite stale). And even if all the information were fresh, the dashboard in its current state does not really measure the key question ("how fast is the community growing?"), nor does it provide context on growth ("how fast is the community growing relative to how fast we want it to grow?"). Measuring growth is a standard activity for businesses, non-profits, and communities; EA has traditionally underinvested in such measurement, and I hope that will be changing under Zach's leadership. If growth is "at the core of [CEA's] mission", CEA is the logical home for producing a community-wide dashboard and enabling the entire community to benefit from it.
Thoughtful reflection on growth measurement. CEA's last public effort at measuring growth was an October 2023 memo for the Meta Coordination Forum. This project estimated that 2023 vs. 2022 growth was 30% for early funnel projects, 68% for mid funnel projects, and 8% for late funnel projects. With the benefit of an additional 18 months of metric data and anecdata, these numbers seem highly overoptimistic. Forum usage metrics have been on a steady decline since FTX's collapse in late 2022, EAG and EAGx attendance and connections have all decreased in 2023 vs. 2022 and 2024 vs. 2023, the number of EA Funds donors continues to decline year over year as has been the case since FTX's collapse, Virtual Program attendance is on a multi-year downward trend, etc. There are a lot of tricky methodological issues to sort out in the process of coming up with a meaningful dashboard, and I think the MCF memo generally took reasonable first stabs at addressing them; however, future efforts should be informed by the shortcomings we can now observe in the MCF memo's approach.
Transparency about growth strategy and targets. I think CEA should publicly communicate its growth strategy and targets to promote transparency and accountability. This post is a good start, though as Zach writes it is "not a detailed action plan. The devil will of course be in those details." To be clear, I think it's important that Zach (who is relatively new in his role) be given a long runway to implement his chosen growth strategy. The "accountability" I'd like to see isn't about e.g. community complaints if CEA fails to hit monthly or quarterly growth targets on certain metrics. It's about honest communication from CEA about their long-term growth plan and regular public check-ins about whether empirical data suggests the plan is going well or not. (FWIW, I think CEA has a lot of room for improvement in this area… For instance, I've probably read CEA's public communications much more thoroughly than almost anyone, and I was extremely surprised to see the claim in the OP that "Growth has long been at the core of our mission.")
fwiw I think in any circle I've been a part of critiquing someone publicly based on their wedding website would be considered weird/a low blow. (Including corporate circles.) [1]
I think there is a level of influence at which everything becomes fair game, e.g. Donald Trump can't really expect a public/private communication disconnect. I don't think that's true of Daniela, although I concede that her influence over the light cone might not actually be that much lower than Trump's.
I agree with this being weird / a low blow in general, but not in this particular case. The crux with your footnote may be that I see this as more of a continuum.
I think someone's interest in private communications becomes significantly weaker as they assume a position of great power over others, conditioned on the subject matter of the communication being a matter of meaningful public interest. Here, I think an AI executive's perspective on EA is a matter of significant public interest.
Second, I do not find a wedding website to be a particularly private form of communication compared to (e.g.) a private conversation with a romantic partner. Audience in the hundreds, no strong confidentiality commitment, no precautions to prevent public access.
The more power the individual has over others, the wider the scope of topics that are of legitimate public interest for others to bring up, and the narrower the scope of communications for which citing them would be weird / a low blow. So what applies to major corporate CEOs with significant influence over the future would not generally apply to most people.
Compare this to paparazzi, who hound celebrities (who do not possess CEO-level power) for material that is not of legitimate public interest, and often under circumstances in which society recognizes particularly strong privacy rights.
I'm reminded of the NBA basketball-team owner who made some racist basketball-related comments to his affair partner, who leaked them. My recollection is that people threw shade on the affair partner (who arguably betrayed his confidences), but few people complained about showering hundreds of millions of dollars worth of tax consequences on the owner by forcing the sale of his team against his will. Unlike comments to a medium-size audience on a website, the owner's comments were particularly private (to an intimate figure, 1:1, protected from non-consensual recording by criminal law).
We (the CEA Events Team) recently posted about how we cut costs for EA Global last year. That's a big contributing factor, and involved hiring someone (a production associate) to help us cut overall costs.
Staff costs are a relatively small proportion of our total spending, but the proportion increased in 2024 compared to 2023 (28% vs 21%).
Between 2021 and 2023, our total spending increased by 264% (from $6.9m to $25.1m), while our headcount increased only 40% (from 24 to 34), which meant we had insufficient capacity to improve the quality and cost-effectiveness of our programs. This informed our decision to make foundation-building our organizational priority in 2024, including both investing in hiring to increase our capacity and cutting non-staff costs, with the majority of savings (per Ollie's comment) being contributed by lower spending on events, especially EAG.
I suspect the crux of the disagreement might be a skepticism about the potential impact of working within the system
I believe there are positions within the system which are more impactful than a random one in ACE's recommended charities. However, I think those are quite senior, and therefore super hard to get, especially for people wanting to go against the system in the sense of prioritising animal welfare much more.
they are often more replaceable in these roles than they would be in an APA position and their impact is limited only to the difference between their skills and the next best candidate which for many roles is not that much.
I guess this also applies to junior positions within the system, whose freedom would be determined to a significant extent by people in senior positions.
the other hand though some leadership jobs might not be the right job fit if they're not up for that kind of critique
Yeah, this used to be my take but a few iterations of trying to hire for jobs which exclude shy awkward nerds from consideration when the EA candidate pool consists almost entirely of shy awkward nerds has made the cost of this approach quite salient to me.
I feel like mainstream people like EA until they understand the implications and are faced with their first trade-off for who to help. To keep them engaged, maybe the new CEA could skip the prioritization part and just focus on making people feel better about their initial cause.
RP actually did some empirical testing on this and we concluded that people really like the name "Effective Altruism", but not the ideas, values or mission.
That's unfortunate. But I think it suggests there's scope for a new 'Centre for Effective Altruism' to push forward exciting new ideas that have more mainstream appeal, like raising awareness of the cause du jour, while the rebranded Center for ________ continues to focus on all the unpopular stuff.
That's interesting, and I'm sad to hear about people declining jobs due to those reasons. On the other hand, though, some leadership jobs might not be the right fit if the person isn't up for that kind of critique. I would imagine there are a bunch of ways to avoid the "EA limelight" for many positions, though of course not public-facing ones.
Slight quibble though: I would consider "Jane Doe sucks for these reasons" an order of magnitude more objectionable than quoting a wedding website to make a point. Maybe wedding websites are sacrosanct in a way I'm missing tho...
Hi Deena, first of all, congratulations on your new arrival! Fellow EA mum here.
So this is a cool business of which I was previously unaware, so thanks for posting.
A key question that came to mind when reading your post and site was: what's stopping clients from going straight to EASE/your partners? I see that you offer a matchmaking service, but for those clients who are equally unfamiliar with you as they are with your partners, the level of trust is the same either way.
Also, how do you untangle the overlapping roles? E.g. some of your individual partners now work as employees for some of your organisation partners, offering similar services; could there be conflicts of interest there?
Thank you! We're enjoying her :) There's nothing stopping clients from going straight to EASE - that's part of why we make it publicly available: we want people to have easy access to qualified professionals. However, there are a few scenarios in which we can help:
They're not exactly sure what type of service they need, what to ask for, and what to expect from the engagement; I often find people asking for one thing when they really need another. We'll help them navigate that.
There are multiple service providers and they're not sure who to choose
We do have relationships with many more service providers - the ones on EASE are just the ones that have worked with EA clients before and are familiar (or part of) the space
So that's why we make the matchmaking service free. It's an easy way to provide value and make sure orgs get the right support.
I do hope that over time, we'll have enough trust from the community that our opinion will matter!
For any partners who work at similar organizations, their arrangement with their employers is their own affair; if they're working full-time there, any other work is done on the side (although I believe that the majority of the professionals have their own businesses).
Just to respond to a narrow point, because I think this is worth correcting as it arises: most of the US/EU GDP growth gap you highlight is just population growth. From 2000 to 2022 the US population grew ~20%, vs. ~5% for the EU. That almost exactly explains the 55% vs. 35% growth gap in that period on your graph: scaling US growth to the EU's population growth gives 1.55 / 1.2 × 1.05 ≈ 1.36.
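A quick sanity check of that arithmetic, sketched in Python using the rounded figures from the comment above (these are approximations, not exact statistics):

```python
# Rough 2000-2022 growth factors quoted in the comment above.
us_gdp_growth = 1.55   # US GDP grew ~55%
eu_gdp_growth = 1.35   # EU GDP grew ~35%
us_pop_growth = 1.20   # US population grew ~20%
eu_pop_growth = 1.05   # EU population grew ~5%

# Restate US GDP growth as if the US had experienced the EU's
# population growth instead of its own.
adjusted_us_growth = us_gdp_growth / us_pop_growth * eu_pop_growth

# ~1.36, nearly identical to the EU's ~1.35, i.e. almost all of the
# headline gap is accounted for by population growth.
print(f"{adjusted_us_growth:.2f}")

# Equivalently, compare per-capita growth directly:
us_per_capita = us_gdp_growth / us_pop_growth  # ~1.29
eu_per_capita = eu_gdp_growth / eu_pop_growth  # ~1.29
```

Either way of slicing it shows the per-capita growth rates were nearly identical over the period, which is the comment's point.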
This shouldn't be surprising, because productivity in the 'big 3' of the US, France, and Germany has tracked closely for quite some time. (Edit: I wasn't expecting this comment to blow up, and it seems I may have rushed this point. See Erich's comment below and my response.) The source below shows a slight increase in the gap, but of <5% over 20 years. If you look further down my post, the Economist reaches the opposing conclusion, but again by very thin margins. Mostly I think the right conclusion is that the productivity gap has barely changed relative to demographic factors.
I'm not really sure where the meme that there's some big / growing productivity difference due to regulation comes from, but I've never seen supporting data. To the extent culture or regulation is affecting that growth gap, it's almost entirely going to be from things that affect total working hours, e.g. restrictions on migration, paid leave, and lower birth rates[1], not from things like how easy it is to found a startup.
But in aggregate, western Europeans get just as much out of their labour as Americans do. Narrowing the gap in total GDP would require additional working hours, either via immigration or by raising the amount of time citizens spend on the job.
Fertility rates are actually pretty similar now, but the US had much higher fertility than Germany especially around 1980 - 2010, converging more recently, so it'll take a while for that to impact the relative sizes of the working populations.
The most "changed my mind" votes in the history of EA Forum comments? This blew my mind a bit; I feel like I've read so much about American productivity outpacing Europe's. I think this deserves a full-length article.
Only the most elite 0.1 percent of people can even have a meaningful "public private disconnect" as you have to have quite a prominent public profile for that to even be an issue.
Hmm yeah, that's kinda my point? Like complaining about your annoying coworker anonymously online is fine, but making a public blog post like "my coworker Jane Doe sucks for these reasons" would be weird, people get fired for stuff like that. And referencing their wedding website would be even more extreme.
(Of course, most people's coworkers aren't trying to reshape the lightcone without public consent so idk, maybe different standards should apply here. I can tell you that a non-trivial number of people I've wanted to hire for leadership positions in EA have declined for reasons like "I don't want people critiquing my personal life on the EA Forum" though.)
Wow, again, I just haven't moved in circles where this would even be considered. Only the most elite 0.1 percent of people can even have a meaningful "public private disconnect", as you have to have quite a prominent public profile for that to even be an issue. Although we all have a "public profile" in theory, very few people are famous/powerful enough for it to count.
I don't think I believe in a public/private disconnect, but I'll think about it some more. I believe in integrity and honesty in most situations, especially when you are publicly disparaging a movement. If you have chosen to lie about and smear a movement with "My impression is that it's a bit of an outdated term", then I think this makes what you say a bit more fair game than other statements where you aren't low-key attacking a group of well-meaning people.
Executive summary: WorkStream Nonprofit has launched new service offerings, including executive assistant support, bookkeeping, tech implementation, and hiring help, to strengthen nonprofit operational capacity and impact, alongside free resources and an upcoming accelerator program.
Key points:
WorkStream Nonprofit aims to eliminate operational bottlenecks for nonprofits by offering tailored support in operations, staffing, and systems to maximize impact.
Four new paid services have been introduced: executive assistant support ($800+/month), bookkeeping services ($500+/month), tech systems implementation, and hiring process design.
Free resources include consulting sessions, educational content, and matchmaking to service providers (with pro bono matchmaking coming soon).
Client testimonials highlight significant operational improvements and time savings, e.g. 1,000+ hours saved annually at one organization.
Applications are open for a revamped 6-month nonprofit accelerator, which includes infrastructure and staff training for $2,500 per organization.
The organization invites partners for pro bono services, service ideas, and donor support to sustain its accessible offerings.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: EA UPY's coordinated and active participation in EAGx CDMX 2025 fostered individual growth, community building, and meaningful connections, with strong pre-event preparation enabling a highly impactful experience for members.
Key points:
EA UPY comprised 17.6% of EAGx CDMX attendees, with 34 participants, mostly students and professionals, engaging as speakers, volunteers, and meetup facilitators.
Pre-event preparation, including workshops on career planning and 1-on-1s, helped maximize the impact of participation and was led by Jorge Luis Castillo Ruz and Janeth Valdivia.
EA UPY members led or contributed to key initiatives such as INFOSEC and AI Safety meetups, a panel on community building in Latin America, and the EA Mexico meetup.
Participants reported gaining insights on AI governance, biosecurity, and career development, with many citing motivation and valuable networking as key takeaways.
Connections made at the event are expected to lead to professional opportunities, collaborations, and increased national and global engagement for EA UPY.
Feedback highlighted the value of structured preparation, with suggestions for more institutional support and online prep activities to increase accessibility.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
A related issue I have actually encountered is something like "but you seem overqualified for this role we are hiring for". Even if previously successful people wanted to take a "less prestigious" role, they might encounter real problems in doing so. I hope the EA ecosystem might have some immunity to this, though, as hopefully mission alignment will be strong enough evidence of why such a person might show interest in a "lower" role.
As a single data point: seconded. I've explicitly been asked by interviewers (in a job interview) why I left a "higher title job" for a "lower title job," with the implication that it needed some special justification. I suspect there have also been multiple times in which someone looking at my resume saw that transition, made an assumption about it, and chose to reject me. (Although this probably happens with non-EA jobs more often than EA jobs, as the "lower title role" was with a well-known EA organization.)
I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
I think the confusion might stem from interpreting EA as "self-identifying with a specific social community" (which they claim they don't, at least not anymore) vs EA as "wanting to do good and caring about others" (which they claim they do, and always did)
Going point by point:
Dario, Anthropic's CEO, was the 43rd signatory of the Giving What We Can pledge and wrote a guest post for the GiveWell blog. He also lived in a group house with Holden Karnofsky and Paul Christiano at a time when Paul and Dario were technical advisors to Open Philanthropy.
This was more than 10 years ago. EA was a very different concept / community at the time, and this is consistent with Daniela Amodei saying that she considers it an "outdated term"
Amanda Askell was the 67th signatory of the GWWC pledge.
This was also more than 10 years ago, and giving to charity is not unique to EA. Many early pledgers don't consider themselves EA (e.g. signatory #46 claims it got too stupid for him years ago)
Many early and senior employees identify as effective altruists and/or previously worked for EA organisations
Amanda Askell explicitly says "I definitely have met people here who are effective altruists" in the article you quote, so I don't think this contradicts it in any way
Anthropic has hired a "model welfare lead" and seems to be the company most concerned about AI sentience, an issue that's discussed little outside of EA circles.
On the Future of Life podcast, Daniela said, "I think since we [Dario and her] were very, very small, we've always had this special bond around really wanting to make the world better or wanting to help people" and "he [Dario] was actually a very early GiveWell fan I think in 2007 or 2008." The Anthropic co-founders have apparently made a pledge to donate 80% of their Anthropic equity (mentioned in passing during a conversation between them here and discussed more here)
Their first company value states, "We strive to make decisions that maximize positive outcomes for humanity in the long run."
Wanting to make the world better, wanting to help people, and giving significantly to charity are not exclusive to the EA community.
It's perfectly fine if Daniela and Dario choose not to personally identify with EA (despite having lots of associations) and I'm not suggesting that Anthropic needs to brand itself as an EA organisation
I think that's exactly what they are doing in the quotes in the article: "I don't identify with that terminology" and "it's not a theme of the organization or anything"
But I think it's dishonest to suggest there aren't strong ties between Anthropic and the EA community.
I don't think they suggest that, depending on your definition of "strong". Just above the screenshotted quote, the article mentions that many early investors were at the time linked to EA.
I think it's a bad look to be so evasive about things that can be easily verified (as evidenced by the twitter response).
I don't think X responses are a good metric of honesty, and those seem to be mostly from people in the EA community.
In general, I think it's bad for the EA community that everyone who interacts with it has to worry about being held liable for life for anything the EA community might do in the future.
I don't see why it can't let people decide if they want to consider themselves part of it or not.
As an example, imagine if I were Catholic, founded a company to do good, raised funding from some Catholic investors, and some of the people I hired were Catholic. If 10 years later I weren't Catholic anymore, it wouldn't be dishonest for me to say "I don't identify with the term, and this is not a Catholic company, although some of our employees are Catholic". And giving to charity or wanting to do good wouldn't be gotchas that I'm secretly still Catholic and hiding the truth for PR reasons. And this is not even about being a part of a specific social community.
I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
I never interpreted that to be the crux/problem here. (I know I'm late replying to this.)
People can change what they identify as. For me, what looks shady in their responses are the clumsy attempts at downplaying their past association with EA.
I don't care about it because I still identify with EA; instead, I care because it goes under "not being consistently candid." (I quite like that expression despite its unfortunate history). I'd be equally annoyed if they downplayed some significant other thing unrelated to EA.
Sure, you might say it's fine not being consistently candid with journalists. They may quote you out of context. Pretty common advice for talking to journalists is to keep your statements as short and general as possible, esp. when they ask you things that aren't "on message." Probably they were just trying to avoid actually-unfair bad press here? Still, it's clumsy and ineffective. It backfired. Being candid would probably have been better here even from the perspective of preventing journalists from spinning this against them. Also, they could just decide not to talk to untrusted journalists?
More generally, I feel like we really need leaders who can build trust and talk openly about difficult tradeoffs and realities.
When should I recommend 80K advising to student group members?
Leaving this as Huon's replied!
[from earlier answer] If you're not sure about whether someone's plans are sufficiently AI-focused (e.g. Global Priorities Research), I'd personally recommend suggesting they apply (with caveats about a higher chance of being unsuccessful). The initial application form is really short!
Where else should I be recommending further careers advice?
In addition to AAC and Probably Good, Alicia mentioned Magnify and Center on Long-Term Risk. These are the options weâre aware of right now!
Are there any recommendations for changing how we recommend 80K website content?
I don't think so - I definitely wouldn't remove specific articles from the fellowship syllabuses! However, if you were recommending the website generally before, you could consider instead recommending specific pages like problem profiles or career reviews.
Per Jess's comment here, the current career planning template will remain useful to students regardless of their focus.
I feel like the counterpoint here is that R&D is incredibly hard. In regular development, you have established methods of how to do things, established benchmarks of when things are going well, and a long period of testing to discover errors, flaws, and mistakes through trial and error.
In R&D, you're trying to do things that nobody has ever done before, and simultaneously establish methods, benchmarks, and errors for that new method, which carries a ton of potential pitfalls. Also, nobody has ever done it before, so the AI is always inherently out-of-training to a much greater degree than in regular work.
Yes, this seems right, hard to know which effect will dominate. I'm guessing you could assemble pretty useful training data of past R&D breakthroughs which might help, but that will only get you so far.
This is super interesting and rhymes a bit with my own efforts to connect the EA community with another one that also has overlap in values but a distinct culture (harm reduction). I took a lot from another earlier post on this topic as well: https://forum.effectivealtruism.org/posts/8Qdc5mPyrfjttLCZn/learning-from-non-eas-who-seek-to-do-good
It's cool to see the members of the tantric retreat were open to learning from EA - are there any learnings you think this community in turn offers to EA?
I see a lot of value in some of the practices: skills when it comes to being attuned to emotions and mind states, and communication norms that allow issues to be brought up and handled in meetings.
This avoids failure modes like resentment building up over time, or unacknowledged resentment/tiredness/stress affecting the outcomes of meetings and interactions.
I have a substack where I write about a lot of different topics, including presenting some ideas I believe can be helpful to EA/LW audiences: honestliving.substack.com.
Right now I'm doing a piece on breathwork as a tool for rapid stress decrease, alertness increase, and "resetting" thinking patterns and state of mind. I have talked to some EAs who get stuck in rabbit holes/sub-branches of a problem, and find themselves unstuck the next morning, with sleep "resetting" some unacknowledged assumptions. Breathwork does the same for me, but quicker. Picking it up and getting used to it takes at most 20 minutes a week, with low risk if handled with care.
Reflections on "Status Handcuffs" over one's career
(This was edited using Claude)
Having too much professional success early on can ironically restrict you later on. People typically are hesitant to go down in status when choosing their next job. This can easily mean that "staying in career limbo" can be higher-status than actually working. At least when you're in career limbo, you have a potential excuse.
This makes it difficult to change careers. It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to learn a new domain, for instance.
The EA Community Context
In the EA community, some aspects of this are tricky. The funders very much want to attract new and exciting talent. But this means that the older talent is in an awkward position.
The most successful get to take advantage of the influx of talent, with more senior leadership positions. But there aren't too many of these positions to go around. It can feel weird to work on the same level or under someone more junior than yourself.
Pragmatically, I think many of the old folks around EA are either doing very well, or are kind of lost/exploring other avenues. Other areas allow people to have more reputable positions, but these are typically not very EA/effective areas. Often E2G isn't very high-status in these clusters, so I think a lot of these people just stop doing much effective work.
Similar Patterns in Other Fields
This reminds me of law firms, which are known to have "up or out" cultures. I imagine some of this acts as a formal way to prevent this status challenge - people who don't highly succeed get fully kicked out, in part because they might get bitter if their career gets curtailed. An increasingly narrow set of lawyers continue on the Partner track.
I'm also used to hearing about power struggles for senior managers close to retirement at big companies, where there's a similar struggle. There's a large cluster of highly experienced people who have stopped being strong enough to stay at the highest levels of management. Typically these people stay too long, then completely leave. There can be few paths to gracefully go down a level or two while saving face and continuing to provide some amount of valuable work.
But around EA and a lot of tech, I think this pattern can happen much sooner - like when people are in the age range of 22 to 35. It's more subtle, but it still happens.
Finding Solutions
I'm very curious if it's feasible for some people to find solutions to this. One extreme would be, "Person X was incredibly successful 10 years ago. But that success has faded, and now the only useful thing they could do is office cleaning work. So now they do office cleaning work. And we've all found a way to make peace with this."
Traditionally, in Western culture, such an outcome would be seen as highly shameful. But in theory, being able to find peace and satisfaction from something often seen as shameful for (what I think of as overall-unfortunate) reasons could be considered a highly respectable thing to do.
Perhaps there could be a world where [valuable but low-status] activities are identified, discussed, and later turned into high-status ones.
The EA Ideal vs. Reality
Back to EA. In theory, EAs are people who try to maximize their expected impact. In practice, EA is a specific ideology that typically has a limited impact on people (at least compared to strong religious groups, for instance). I think that the EA scene has demonstrated success at getting people to adjust careers (in circumstances where it's fairly cheap and/or favorable to do so), and has created an ecosystem that rewards people for certain EA behaviors. But at the same time, people typically come with a great deal of non-EA constraints that must be continually satisfied for them to be productive: money, family, stability, health, status, etc.
Personal Reflection
Personally, every few months I really wonder what might make sense for me. I'd love to be the kind of person who would be psychologically okay doing the lowest-status work for the youngest or lowest-status people. At the same time, knowing myself, I'm nervous that taking a very low-status position might cause some of my mind to feel resentment and burnout. I'll continue to reflect on this.
Just as a side point, I do not think Amanda's past relationship with EA can accurately be characterized as much like Jonathan Blow's, unless he was far more involved than just being an early GWWC pledge signatory, which I think is unlikely. It's not just that Amanda was, as the article says, once married to Will. She wrote her doctoral thesis on an EA topic, how to deal with infinities in ethics: https://askell.io/files/Askell-PhD-Thesis.pdf Then she went to work in AI for what I think is overwhelmingly likely to be EA reasons (though I admit I don't have any direct evidence to that effect), given that it was in 2018, before the current excitement about generative AI, and relatively few philosophy PhDs, especially those who could fairly easily have gotten good philosophy jobs, made that transition. She wasn't a public figure back then, but I'd be genuinely shocked to find out she didn't have an at least mildly significant behind-the-scenes effect through conversation (not just with Will) on the early development of EA ideas.
Not that I'm accusing her of dishonesty here or anything: she didn't say that she wasn't EA or that she had never been EA, just that Anthropic wasn't an EA org. Indeed, given that I just checked and she still mentions being a GWWC member prominently on her website, and she works on AI alignment and wrote a thesis on a weird, longtermism-coded topic, I am somewhat skeptical that she is trying to personally distance herself from EA: https://askell.io/
That's why I write my essays and try and get the word out. Because even if the rope is tight around your neck and there seems like no way to get out of it, you should still kick your feet and try.
I think it's good, essential even, that you keep trying and speaking out. Sometimes that's what helps others to act too. The only thing I worry about is that this fight, if framed only as hopeless, can paralyze the very people who might help change the trajectory. Despair can be as dangerous as denial.
That's why I believe the effort itself matters: not because it guarantees success, but because it keeps the door open for others to walk through.
Your questions come from a frame of altruism as obligatory, and while I feel that force (it's what got me into the movement), I would propose excitement as a healthier, more sustainable frame; see Tyler Alterman's story for an example of such a shift.
There is no obligation to maximise your income for donating to impactful charities in ways that necessarily entail a lot of personal sacrifice. There are examples of people who I admire for doing earning to give well, but AFAICT they take the opportunity frame, e.g. Jeff Kaufman and Julia Wise, AGB and Denise Melchin, etc. (do correct me if you're reading this and think I misrepresented you?)
The reasoning that top roles will almost certainly be filled by top candidates, so there's no point worrying about them being filled, is counterproductive in the aggregate, and ignores that you may be one of those top candidates, which you can only really find out by applying. It's also useful to reframe the job application process as an information-gathering exercise in personal fit, instead of just assuming no fit (not very evidence-based, that).
You shouldn't be troubled by the dilemma of pursuing a career in harmful industries: to first approximation, just don't do it, there are many reasons why you shouldn't. I'd classify this line of reasoning under the perils of naive maximisation, and note that it's really hard to avoid harm as a fanatic of anything in general (and utilitarianism in particular)
On AI, I'll let others chime in, although I think 80,000 Hours' primer on mitigating AI risks is a solidly comprehensive introduction that should help you understand why many EAs prioritise it, and argue against its specific points / framing etc to be more substantive. The other thing I'd point out is that AI's mindshare on the forum is disproportionate to other proxies for "emphasis", like talent (FTEs) and funding vs other areas, and it's worth clarifying what you have in mind / are concerned about
I also think taking a historical perspective on how the movement emerged may illuminate the AI thing for you - the ideas underpinning EA came not just from the global health & development side via the charity evaluators (Karnofsky, Hassenfeld, etc), but also philosophers (Singer, Parfit, etc) and transhumanists (Bostrom, Yudkowsky, etc), the lattermost of whom had been thinking about the consequences of radical future technological change, in particular events which might drastically curtail humanity's astronomically large future potential. When MacAskill and Todd created the Centre for Effective Altruism as an umbrella org for 80K and GWWC way back when, the "Effective Altruism" part was intended to be a purely descriptive part of CEA's name, but it then took on a life of its own (becoming a question, an ideology, a social movement that wants to be more question than ideology, etc) that gradually encompassed all these ostensibly disparate ideas under a sort of pluralistic banner of doing good better. Not everyone under this banner agrees with each other
It's an interesting hypothesis. I think one way in which a SIE can be encouraged is through AI- and data-enabled financial / risk modelling of any given R&D project.
I was writing on this yesterday, serendipitously!
AI financial risk quantification might significantly improve the accuracy of priors or other probabilistic model variables that evaluate any given R&D IP for a market. If so, we might well be on the cusp of a gradual transition to an economy that is increasingly (one day entirely?) R&D focused, on the assumption that AIs or AI-enabled R&D are more likely to perform competitively, for either direct or indirect reasons. (I think the psychological component of AI augmenting the way people think about, or copilot to solve, problems is still an open area…)
Hi Deena, first of all, congratulations on your new arrival! Fellow EA mum here.
So this is a cool business of which I was previously unaware, so thanks for posting.
A key question that came to mind when reading your post and site was: what's stopping clients from going straight to EASE/your partners? I see that you offer a matchmaking service, but for clients who are as unfamiliar with you as with your partners, the level of trust is the same either way.
Also, how do you untangle the overlapping roles e.g. some of your individual partners now work as employees for some of your organisation partners offering similar services; could there be conflicts of interest there?
I spent most of my early career as a data analyst in industry, which engendered in me a deep wariness of quantitative data sources and plumbing, and a neverending discomfort at how often others tended to just take them as given for input into consequential decision-making, even if at an intellectual level I knew their constraints and other priorities justified it and they were doing the best they could. ...and then I moved to global health applied research and realised that the data trustworthiness situation was so much worse I had to recalibrate a lot of expectations / intuitions.
Disease burden estimates, such as child mortality rates, are a key input in our cost-effectiveness analyses. Historically, for consistency and convenience, we've primarily relied on a single source for these estimates.
Going forward, we plan to consider multiple sources for burden estimates, apply a higher level of scrutiny to these estimates, and adjust for potential biases or inaccuracies, like we do when estimating other parameters in our models.
This change has already led to us making over $25m in additional grants we would not have otherwise. (Footnote: Our updated estimates of malaria burden in Chad have led us to allocate $3.3 million in grantmaking for seasonal malaria chemoprevention (more), and $25.9m for insecticide-treated nets (not yet published).) We expect to consider additional research to improve estimates of burden of disease in the future.
The rest of the note was cathartic to skim-read. For instance, when I looked into the idea of distributing low-cost glasses to correct presbyopia in low-income countries a while back (a problem that afflicts over 1.8 billion people globally, with >$50 billion in lost potential productivity annually in LMICs alone), the industry data analyst in me was dismayed to learn that the WHO didn't even collect data on how many people needed glasses prior to 2008, so governments and associated stakeholders understandably prioritised allocation of resources towards surgical and medical interventions instead. I think the existence of orgs like IHME and OWID greatly improves the GHD data situation nowadays, but there are many "pockets" where it remains a far cry from what it could be, so I appreciated that GiveWell said they're considering:
Fund data collection. This includes potentially funding additional nationally representative surveys (DHS/MIS/MICS) or additional modules to these surveys, or supporting more autopsy data collection to better understand cause-specific mortality, particularly for malaria in sub-Saharan Africa. Our guess is that part of the reason different models disagree is that the data underlying these models is limited. We may look for cases where we could fund additional data collection to improve burden of disease estimates.
Another example: a fair bit of my earlier analyst work involved either reconciling discrepant figures for ostensibly similar metrics (e.g. campaign revenue breakdowns etc) or root-cause analysing-via-data-plumbing whether a flagged metric needed to be acted on or was a false positive, which made me appreciate this section:
Key uncertainties: ...
There are likely technical nuances we haven't captured. We've found that comparisons between sources are more complex than they first appear. For example, we recently learned that IGME and IHME define diarrheal diseases differently. Similar technical differences likely exist elsewhere.
Possible next steps:
Get a better understanding of what's driving differences in models. This may come from bringing together modeling groups in regions with high disagreement to understand methodological differences.
Look for ways to improve model transparency. We've found it difficult to engage with burden of disease models, and think that finding ways to see inside the black box of how they produce estimates may make it easier to understand which estimates to rely on and how to improve them.
This is fantastic to hear! The Global Burden of Disease process (while the best and most reputable we have) is surprisingly opaque and hard to follow in many cases. I haven't been able to find the spreadsheets with their calculations.
Their numbers are usually reasonable but bewildering in some cases and obviously wrong in others. GiveWell moving towards combining GBD with other sensible models is a great way forward.
It's a bit unfortunate that the best burden of disease models we have aren't more understandable.
Thanks for the interesting summary of campus activism! A few questions that came to mind while reading this:
Brand consistency seems to be mentioned several times in relation to movement building. If the campus org is mostly seen by students at that particular university, what is the importance of consistent branding between campuses?
Given that several orgs (GFI, ALDF, VO, ASAP) are already in the space, yet activism seems quite limited, do you have concrete recommendations for how approaches should change? For example, should orgs try to focus on figuring out a few high quality university groups before scaling?
My two cents is that "brand consistency" is interesting, because a brand roughly reflects the strain of vegan club in question: whether it's associated with particular activist networks, whether it's more vegetarian than vegan, or something else. The level of inconsistency is also indicative of a lack of coordination across groups.
My experience in university was that the local club was a bit of an awkward merge between a social club and people with a particular activist agenda (very visible demonstrations against animal labs). In a sense, the career building approach of Alt Protein Projects or the cause agnosticism of EA groups may be better at attracting members. But I'm not sure.
So glad Consequentialism is out, and we can finally follow our feelings!! it feels so dang good. I love being a human with feelings. All these years denying it to follow mathematical algorithms like a robot was tiring. Feelings are super effective!! I suppose we'll need a new introduction course where we explain to smart people what feelings are, and how they are already included and fully installed in us but we just need to put a check mark in that one box to turn them on, and boom when you do that suddenly you see a whole new world and there's lots of art everywhere too. Finally EA will have some art, coz us feeling humans of course demand it.
This post made my day (to be fair, it's only 7:40am, but whatever, I doubt anything else can put such a big smile on my face in the remaining 15+ hours).
fwiw I think in any circle I've been a part of critiquing someone publicly based on their wedding website would be considered weird/a low blow. (Including corporate circles.) [1]
I think there is a level of influence at which everything becomes fair game, e.g. Donald Trump can't really expect a public/private communication disconnect. I don't think that's true of Daniela, although I concede that her influence over the light cone might not actually be that much lower than Trump's.
Wow again I just haven't moved in circles where this would even be considered. Only the most elite 0.1 percent of people can even have a meaningful "public private disconnect" as you have to have quite a prominent public profile for that to even be an issue. Although we all have a "public profile" in theory, very few people are famous/powerful enough for it to count.
I don't think I believe in a public/private disconnect but I'll think about it some more. I believe in integrity and honesty in most situations, especially when you are publicly disparaging a movement. If you have chosen to lie and smear a movement with "My impression is that it's a bit of an outdated term" then I think this makes what you say a bit more fair game than for other statements where you aren't low-key attacking a group of well-meaning people.
Endnote 2: Why can't we just defer to existing experts, instead of figuring stuff out for ourselves?
Alternative & complementary response: which experts? Why them, instead of these other experts who disagree with the former? How can you tell if you're (say) being misled? To quote John Wentworth:
When non-experts cannot distinguish true expertise from noise, money cannot buy expertise. Knowledge cannot be outsourced; we must understand things ourselves. ...
King Louis XV of France was one of the richest and most powerful people in the world. He died of smallpox in 1774, the same year that a dairy farmer successfully immunized his wife and children with cowpox. All that money and power could not buy the knowledge of a dairy farmer - the knowledge that cowpox could safely immunize against smallpox. There were thousands of humoral experts, faith healers, eastern spiritualists, and so forth who would claim to offer some protection against smallpox, and King Louis XV could not distinguish the real solution.
John also suggests that the kind of deep model you want to build is gears-level models (that link has a lot of examples across various domains):
If I want to build long-term knowledge-wealth, then the analogy between money-wealth and knowledge-wealth suggests an interesting question: what does a knowledge "investment" look like? What is a capital asset of knowledge, an investment which pays dividends in more knowledge?
Enter gears-level models.
Mapping out the internal workings of a system takes a lot of up-front work. Itâs much easier to try random molecules and see if they cure cancer, than to map out all the internal signals and cells and interactions which cause cancer. But the latter is a capital investment: once weâve nailed down one gear in the model, one signal or one mutation or one cell-state, that informs all of our future tests and model-building. If we find that Y mediates the effect of X on Z, then our future studies of the Y-Z interaction can safely ignore X. On the other hand, if we test a random molecule and find that it doesnât cure cancer, then that tells us little-to-nothing; that knowledge does not yield dividends.
John has some advice on how to read papers to build gears-level models, although for most situations I prefer Sarah Constantin's advice to do fact-posting.
Perhaps most immediately jarring is the recommendation to add olive oil to smoothies - a culinary choice that defies both conventional wisdom and basic palatability.
I've tried putting olive oil in smoothie-adjacent concoctions (calling the things I've made "smoothies" would be an insult to smoothies) and it always makes me nauseous.
One time, due to poor planning, the only thing I had available to eat all day was an olive-oil-based smoothie-adjacent beverage, and I still couldn't manage to choke it down.
That's interesting - I think I might move in different circles. Most people I know would not really understand the concept of there being a PR world where you present different things from your personal life.
Perhaps you move in more corporate or higher-flying circles, where this kind of public/private communication disconnect is normal and where it's considered rude to challenge it? Interesting!
Thank you! Agreed that EA as a community often overlooks the value of protests and social change. Excited to look more deeply into the report
On "backfire" - do you have any view on backfire of BLM protests? I've been concerned with the pattern of protest -> police stop enforcing in a neighborhood -> murder rates go up. Seems like if this does happen, it really raises the bar for the long-run positive effects protests like this need to achieve in order to offset the medium-term murder increase.
But maybe I'm thinking of this wrong. Or maybe this wouldn't be considered backfire - more of an unintended side effect?
On "backfire" - do you have any view on backfire of BLM protests? I've been concerned with the pattern of protest -> police stop enforcing in a neighborhood -> murder rates go up.
I wouldn't consider this a "backfire", although murder rates going up is definitely a bad thing. In the context of protests, a backfire isn't when anything bad happens; it's when the protests hurt the protesters' goals. If "police stop enforcing in a neighborhood" is a goal of BLM protests (which it basically is), then this is a success, not a backfire, and the increase in murder rate is an unfortunate consequence.
A backfire effect would be something like: protest -> protests make people feel unsafe -> city allocates more funding to the police.
Thank you for this post. I think it does a great job of outlining the double-edged sword we're facing - the potential for AI to either end enormous suffering or amplify it exponentially.
Your suggestion to reframe our movement's goal really expanded my thinking: "ensure that advanced AI and the people who control it are aligned with animals' interests by 2030." This feels urgent and necessary given the timelines you've outlined.
I'm particularly concerned that our society's current commodified view of animals could be baked into AGI systems and scaled to unprecedented levels.
The strategic targets you've identified make perfect sense - especially the focus on AI/animal collaborations and getting animal advocates into rooms where AGI decisions are being made. We should absolutely be leveraging AI-powered advocacy tools while we can still shape their development.
Thank you for this clarity. I'll be thinking much more deeply about how my own advocacy work needs to adapt to this possible near-future scenario.
I understand why people shy away from or hide their identities when speaking with journalists, but I think this is a mistake, largely for reasons covered in this post. A large part of the deterioration of EA's name brand is due not just to FTX but to the risk-averse reaction to FTX by individuals (again, for understandable reasons), which harms the movement in a way where the costs are externalized.
When PG refers to keeping your identity small, he means don't defend it or its characteristics for the sake of it. There's nothing wrong with being a C/C++ programmer, but realizing it's not the best for rapid development needs or memory safety. In this case, you can own being an EA/your affiliation to EA and not need to justify everything about the community.
We had a bit of a tragedy of the commons problem because a lot of people are risk-averse and don't want to be associated with EA in case something bad happens to them but this causes the brand to lose a lot of good people you'd be happy to be associated with.
FWIW, I appreciated reading this :) Thank you for sharing it!
We had a bit of a tragedy of the commons problem because a lot of people are risk-averse and don't want to be associated with EA in case something bad happens to them but this causes the brand to lose a lot of good people you'd be happy to be associated with.
I so agree! I think there is something virtuous and collaborative for those of us who have benefited from EA and its ideas / community to just... be willing to stand up and say simply that. I think these ideas are worth fighting for.
Comments on 2025-04-04
Erich_Grunewald 🔸 @ 2025-04-04T12:18 (+13) in response to Third-wave AI safety needs sociopolitical thinking
This is weird because other sources do point towards a productivity gap. For example, this report concludes that "European productivity has experienced a marked deceleration since the 1970s, with the productivity gap between the Euro area and the United States widening significantly since 1995, a trend further intensified by the COVID-19 pandemic".
Specifically, it looks as if, since 1995, the GDP per capita gap between the US and the eurozone has remained very similar, but this is due to a widening productivity gap being cancelled out by a shrinking employment rate gap:
This report from Banque de France has it that "the EU-US gap has narrowed in terms of hours worked per capita but has widened in terms of GDP per hours worked", and that in France at least this can be attributed to "producers and heavy users of IT technologies":
The Draghi report says 72% of the EU-US GDP per capita gap is due to productivity, and only 28% is due to labour hours:
Part of the discrepancy may be that the OWID data only goes until 2019, whereas some of these other sources report that the gap has widened significantly since COVID? But that doesn't seem to be the case in the first plot above (it still shows a widening gap before COVID).
Or maybe most of the difference is due to comparing the US to France/Germany, versus also including countries like Greece and Italy that have seen much slower productivity growth. But that doesn't explain the France data above (it still shows a gap between France and the US, even before COVID).
AGB 🔸 @ 2025-04-04T18:52 (+5)
Thanks for this. I already had some sense that historical productivity data varied, but this prompted me to look at how large those differences are and they are bigger than I realised. I made an edit to my original comment.
TL;DR: Current productivity people mostly agree about. Historical productivity they do not. Some sources, including those in the previous comment, think Germany was more productive than the US in the past, which makes being less productive now more damning compared to a perspective where this has always been the case.
***
For simplicity I'm going to focus on US vs. Germany in the first three bullets:
***
Where does that leave the conversation about European regulation? This is just my $0.02, but:
In my opinion the large divergences of opinion about the 90s, while academically interesting, are only indirectly relevant to the situation today. The situation today seems broadly accepted to be as follows:
I think that when Americans think about European regulations, they are mostly thinking about the Western and Northern countries. For example, when I ask Claude which EU countries have the strongest labour rights, the list of countries it gives me is entirely a subset of those countries. But unless you think replacing those regulations with US-style regulations would allow German productivity to significantly exceed US productivity, any claim that this would close the GDP per capita gap between the US and Germany - around 1.2x - without more hours being worked is not very reasonable. Let alone the GDP gap, which layers on the US' higher population growth.
Digging into Southern Europe and figuring out why e.g. Italy and Germany have failed to converge seems a lot more reasonable. Maybe regulation is part of that story. I don't know.
So I land pretty much where the Economist article is, which is why I quoted it:
I am eyeballing page 66 and adding together the 'TFP' and 'capital deepening' factors. I think that amounts to labour productivity, and indeed the report does say "labour productivity...ie the product of TFP and capital deepening". Less confident about this than the other figures though.
Unhelpfully, the data is displayed as % of 2015 productivity. I'm getting my claim from (a) the OECD putting German 1995 productivity at 80% of 2015 levels, vs. the US being at 70% of 2015 levels, and (b) 2022 productivity being 107% vs. 106% of 2015 levels. Given the OECD has 2022 US/German productivity virtually identical, I think the forced implication is that they think German productivity was >10% higher in 1995.
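A quick sanity check of that implication, sketched in Python (the 80/70 and 107/106 index values are the OECD figures quoted above; treating 2022 US and German productivity levels as equal is the comment's own assumption):

```python
# Productivity expressed as a share of each country's own 2015 level.
de_1995, us_1995 = 0.80, 0.70   # 1995 index values (OECD, as quoted)
de_2022, us_2022 = 1.07, 1.06   # 2022 index values (OECD, as quoted)

# Assumption: 2022 levels are virtually identical, so
# de_2022 * DE_2015 == us_2022 * US_2015, hence DE_2015/US_2015 == us_2022/de_2022.
de_over_us_2015 = us_2022 / de_2022

# Implied 1995 ratio of German to US productivity levels:
ratio_1995 = (de_1995 / us_1995) * de_over_us_2015
print(f"Implied German/US productivity ratio in 1995: {ratio_1995:.2f}")
# prints roughly 1.13, i.e. German productivity >10% higher, as claimed
```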
Vasco Grilo🔸 @ 2025-04-04T14:40 (+2) in response to What I learned from a week in the EU policy bubble
Thanks for the great clarifications, Lauren! Strongly upvoted.
Interesting example! I would be interested to know more, but I understand it may be sensitive information to share publicly. I think one can help 400 M shrimp by donating 26.7 k$ (= 400*10^6/(15*10^3)) to the Shrimp Welfare Project (SWP). So, if your example was representative of the impact of a career in policy inside the system, and the impact per animal helped in your example matched that of SWP (which I estimated to be 0.0426 DALYs averted), maximising donations could still be better. For a career of 40 years, one would only need to donate 668 $ (= 26.7*10^3/40) more to SWP per year relative to the career in policy inside the system.
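A minimal sketch of that back-of-the-envelope arithmetic (the 15,000-shrimp-per-dollar rate is the SWP cost-effectiveness figure the comment relies on):

```python
# Reproducing the figures in the comment above.
# Assumption: SWP helps ~15,000 shrimp per dollar donated, as stated.
shrimp_helped = 400e6        # 400 M shrimp in the policy example
shrimp_per_dollar = 15e3     # stated SWP cost-effectiveness
career_years = 40

donation_equivalent = shrimp_helped / shrimp_per_dollar  # total $ to match via SWP
extra_per_year = donation_equivalent / career_years      # spread over a 40-year career

print(f"Total: ${donation_equivalent:,.0f}; per year: ${extra_per_year:,.0f}")
# prints Total: $26,667; per year: $667
```

(The comment rounds the total to 26.7 k$ first, hence its slightly higher 668 $/year figure.)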
lauren_mee @ 2025-04-04T18:50 (+1)
Will reply properly later
Sarah Cheng @ 2025-04-04T15:30 (+4) in response to Stewardship: CEAâs 2025-26 strategy to reach and raise EAâs ceiling
To quickly add on to what Toby wrote: the CEA Online Team has also been redesigning effectivealtruism.org and we expect to soft launch it soon. I post quick takes when we update our half-quarterly plans, so you can follow along there. :)
AnonymousEAForumAccount @ 2025-04-04T18:31 (+2)
Thanks Sarah, good to know!
Toby Tremlett🔹 @ 2025-04-04T08:37 (+9) in response to Stewardship: CEA's 2025-26 strategy to reach and raise EA's ceiling
Hey! I'm the current staff-member working on the EA Newsletter - and I'm currently working on the EA Newsletter improvement project we didn't have time for before. So far this has been:
The next step is more seriously thinking about marketing, considering advertising it, integrating it more with other CEA touchpoints etc... Stay tuned.
Also, I always welcome any suggestions for low-hanging fruit in Newsletter marketing (I'm sure there is a lot of this), as well as general feedback on the Newsletter itself.
AnonymousEAForumAccount @ 2025-04-04T18:30 (+2)
Thanks for the reply Toby! These seem like great steps to be taking, and I'm glad they're in the works.
Since you ask about suggestions, here are some other things I'd be looking at if I were in your shoes.
Ozzie Gooen @ 2025-04-04T18:29 (+2) in response to Contact us
Quick thoughts on the AI summaries:
1. Does the EA Forum support <details> / <summary> blocks, for hidden content? If so, I think that should heavily be used in these summaries.
2. If (1) is done, then I'd like sections like:
- related materials
- key potential counter-claims
- basic evaluations, using some table.
Then, it would be neat if the full prompt for this was online, and maybe if there could be discussion about it.
Of course, even better would be systems where these summaries could be individualized or something, but that would be more expensive.
SummaryBot @ 2025-04-04T18:17 (+1) in response to How I shared Effective Altruism with my friends
Executive summary: The author shares how they introduced Effective Altruism (EA) to friends unfamiliar with the movement by explaining its core ideas, personal impact, and diverse community, encouraging more open conversations and engagement with EA.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
SummaryBot @ 2025-04-04T18:16 (+1) in response to Advice on Advice: A Framework For Evaluating Advice
Executive summary: This post offers a practical framework for critically evaluating advice by assessing the advice giver's awareness, experience, and intention, especially when navigating uncertainty or crises where poor advice can have outsized negative consequences.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Kevin Kuruc @ 2025-04-04T17:39 (+13) in response to Three Organizers Walk Into a Liberal Arts College
I have no immediate or useful feedback on this specific question, but just wanted to say that I'll be starting as an economics professor in the Fall at Middlebury. I'd be excited to meet and engage with y'all! If any of your identified bottlenecks are something a faculty member would be able to help with, keep me in mind :)
Niel_Bowerman @ 2025-04-04T17:27 (+12) in response to 80,000 Hours is shifting its strategic approach to focus more on AGI
What 80k programmes will be delivering in the near-term
In response to questions that we and CEA have received about how, and to what extent, our programme delivery will change as a result of our new strategic focus, we wanted to give a tentative indication of our programmeâs plans over the coming months.
The following is our current guess of what we're going to be doing in the short term. It's quite zoomed in on the things that are or aren't changing as a result of our strategic update, rather than going into detail on: a) what things we've decided not to prioritise, even though we think they'd be valuable for others to work on; b) things which aren't affected by our strategy very much (such as our operations functions).
It's also written in the context of 80k still thinking through our plans - so we're not able (or trying) to give a firm commitment of what we'll definitely do or not do. Despite our uncertainty, we thought it'd be useful to share the tentative plans that we have here - so that people considering what to work on or whether to recommend 80k's resources have an idea what to expect from us.
~
To be clear, we think it's an unspeakable travesty that we live in a world where there is so much preventable suffering and death going unaddressed. The following is a concise statement of our priorities, but should not be taken as an indication that we think it's anything other than a tragedy that so much triage is needed.
We would love it if our programmes could continue to deliver resources focusing on a wider breadth of impactful cause areas, but we think unfortunately the situation with AI is severe and urgent enough that we need to prioritise using our capacity to help with it.
In writing this, we hope that we can help others to figure out where the gaps left by 80k are likely to be, so that they are easier to fill - and to also understand how 80k might still be useful to them / their groups.
~
Web
Podcast
Advising
Job board
Headhunting
Video
JWS đ¸ @ 2025-04-02T12:12 (+4) in response to Third-wave AI safety needs sociopolitical thinking
Hey Cullen, thanks for responding! So I think there are object-level and meta-level thoughts here, and I was just using Jeremy as a stand-in for the polarisation of Open Source vs AI Safety more generally.
Object Level - I don't want to spend too long here as it's not the direct focus of Richard's OP. Some points:
Meta Level -
Again, not saying that this is referring to you in particular
Cullen đ¸ @ 2025-04-04T16:58 (+2)
Thanks for this very thoughtful reply!
I have a lot to say about this, much of which boils down to two points:
The rest of your comment I agree with.
I realize that point (1) may seem like nitpicking, and that I am also emotionally invested in it for various reasons. But this is all in the spirit of something like avoiding reasoning from fictional evidence: if we want to have a good discussion of avoiding unnecessary polarization, we should reason from clear examples of it. If Jeremy is not a good example of it, we should not use him as a stand-in.
Right, this is in large part where our disagreement is: whether Jeremy is good evidence for or an example of unnecessary polarization. I just simply don't think that Jeremy is a good example of where there has been unnecessary (more on this below) polarization, because I think that he, explicitly and somewhat understandably, just finds the idea of approval regulation for frontier AI abhorrent. So to use Jeremy as evidence or an example of unnecessary polarization, we have to ask what he was reacting to, and whether something unnecessary was done to polarize him against us.
Dislightenment "started out as a red team review" of FAIR, and FAIR is the most commonly referenced policy proposal in the piece, so I think that Jeremy's reaction in Dislightenment is best understood as, primarily, a reaction to FAIR. (More generally, I don't know what else he would have been reacting to, because in my mind FAIR was fairly catalytic in this whole debate, though it's possible I'm overestimating its importance. And in any case I wasn't on Twitter at the time so may lack important context that he's importing into the conversation.) In which case, in order to support your general claim about unnecessary polarization, we would need to ask whether FAIR did unnecessary things to polarize him.
Which brings us to the question of what exactly unnecessary polarization means. My sense is that avoiding unnecessary polarization would, in practice, mean that policy researchers write and speak extremely defensively to avoid making any unnecessary enemies. This would entail falsifying not just their own personal beliefs about optimal policy, but also, crucially, falsifying their prediction about what optimal policy is from the set of preferences that the public already holds. It would lead to writing positive proposals shot through with diligent and pervasive reputation management, leading to a lot of unnecessary and confusing hedges and disjunctive asides. I think pieces like that can be good, but it would be very bad if every piece was like that.
Instead, I think it is reasonable and preferable for discourse to unfold like this: Policy researchers write politely about the things that they think are true, explain their reasoning, acknowledge limitations and uncertainties, and invite further discussion. People like Jeremy then enter the conversation, bringing a useful different perspective, which is exactly what happened here. And then we can update policy proposals over time, to give more or less weight to different considerations in light of new arguments, political evidence (what do people think is riskier: too much centralization or too much decentralization?) and technical evidence. And then maybe eventually there is enough consensus to overcome the vetocratic inertia of our political system and make new policy. Or maybe a consensus is reached that this is not necessary. Or maybe no consensus is ever reached, in which case the default is nothing happens.
Contrast this with what I think the "reduce unnecessary polarization" approach would tend to recommend, which is something closer to starting the conversation with an attempt at a compromise position. It is sometimes useful to do this. But I think that, in terms of actual truth discovery, laying out the full case for one's own perspective is productive and necessary. Without full-throated policy proposals, policy will tend too much either towards an unprincipled centrism (wherein all perspectives are seen as equally valid and therefore worthy of compromise) or towards the perspectives of those who defect from the "start at compromise" policy. When the stakes are really high, this seems bad.
To be clear, I don't think you're advocating for this "compromise-only" position. But in the case of Jeremy and Dislightenment specifically, I think this is what it would have taken to avoid polarization (and I doubt even that would have worked): writing FAIR with a much mushier, "who's to say?" perspective.
In retrospect, I think it's perfectly reasonable to think that we should have talked about centralization concerns more in FAIR. In fact, I endorse that proposition. And of course it was in some sense unnecessary to write it with the exact discussion of centralization that we did. But I nevertheless do not think that we can be said to have caused Jeremy to unnecessarily polarize against us, because I think him polarizing against us on the basis of FAIR is in fact not reasonable.
I disagree with this as a textual matter. Here are some excerpts from Dislightenment (emphases added):
He fairly consistently paints FAIR (or licensing more generally, which is a core part of FAIR) as the main policy he is responding to.
It is definitely fair for him to think that we should have talked about decentralization more! But I don't think it's reasonable for him to polarize against us on that basis. That seems like the crux of the issue.
Jeremy's reaction is most sympathetic if you model the FAIR authors specifically, or the TAI governance community more broadly, as a group of people totally unsympathetic to distribution-of-power concerns. The problem is that that is not true. My first main publication in this space was on the risk of excessively centralized power from AGI; another lead FAIR coauthor was on that paper too. Other coauthors have also written about this issue: e.g., 1; 2; 3 at 46-48; 4; 5; 6. It's a very central worry in the field, dating back to the first research agenda. So I really don't think polarization against us on the grounds that we have failed to give centralization concerns a fair shake is reasonable.
I think the actual explanation is that Jeremy and the group of which he is representative have a very strong prior in favor of open-sourcing things, and find it morally outrageous to propose restrictions thereon. While I think a prior in favor of OS is reasonable (and indeed correct), I do not think it reasonable for them to polarize against people who think there should be exceptions to the right to OS things. I think that it generally stems from an improper attachment to a specific method of distributing power without really thinking through the limits of that justification, or acknowledging that there even could be such limits.
You can see this dynamic at work very explicitly with Jeremy. In the seminar you mention, we tried to push Jeremy on whether, if a certain AI system turns out to be more like an atom bomb and less like voting, he would still think it's good to open-source it. His response was that AI is not like an atomic bomb.
Again, a perfectly fine proposition to hold on its own. But it completely fails to either (a) consider what the right policy would be if he is wrong, or (b) acknowledge that there is substantial uncertainty or disagreement about whether any given AI system will be more bomb-like or voting-like.
I agree! But I guess I'm not sure where the room for Jeremy's unnecessary polarization comes in here. Do reasonable people get polarized against reasonable takes? No.
I know you're not necessarily saying that FAIR was an example of unnecessary polarizing discourse. But my claim is either (a) FAIR was in fact unnecessarily polarizing, or (b) Jeremy's reaction is not good evidence of unnecessary polarization, because it was a reaction to FAIR.
I think all of the opinions of his we're discussing are from July 23? Am I missing something?
A perfectly reasonable opinion! But one thing that is not evident from the recording is that Jeremy showed up something like 10-20 minutes into the webinar, and so in fact missed a large portion of our presentation. Again, I think this is more consistent with some story other than unnecessary polarization. I don't think any reasonable panelist would think it appropriate to participate in a panel where they missed the presentation of the other panelists, though maybe he had some good excuse.
defun @ 2025-04-04T15:46 (+8) in response to defun's Quick takes
I'd love to see Joey Savoie on Dwarkesh's podcast. Can someone make it happen?
Joey with Spencer Greenberg: https://podcast.clearerthinking.org/episode/154/joey-savoie-should-you-become-a-charity-entrepreneur/
Dee Tomic @ 2025-04-01T23:21 (+12) in response to Open thread: April - June 2025
Hi EAs, I'm Dee, first-time forum poster but long-time advocate for EA principles since first discovering the movement through Peter Singer's work. I've always had a particular interest in global health and wellbeing, which initially inspired me to complete a medical degree. While I enjoyed my studies, I became somewhat disheartened with the scope of impact I could have as a single doctor in a system largely geared towards treatment rather than prevention of disease. After a career pivot to management consulting for a couple of years, I eventually completed my PhD in epidemiology. I'm now using my research experience and medical knowledge to tackle complex public health problems.
The more I've solidified my own goals to do good, including through my career as well as through giving to effective causes, the more I've sought to engage with EA content and the community. I look forward to connecting and sharing ideas with you all!
MichaelDickens @ 2025-04-04T15:33 (+4)
Epidemiology! I hadn't really thought about epidemiology as a career, but it strikes me as potentially very high impact, especially if you're going into it with attention to impact. My basic thinking is that the field of health tends to have some of the lowest-hanging fruit in terms of improving people's lives, and epidemiology can have a leveraged impact by benefiting many people simultaneously (which is also why being a doctor is maybe less good: the number of people you can help is much smaller).
If you have thoughts, I am interested in where you see the big problems in epidemiology, or at least the big problems that you personally can contribute to. It's not a space I know much about. (You did say the problems are complex, which seems true to me, so I don't think I'm really in a position to understand epidemiology lol.)
AnonymousEAForumAccount @ 2025-04-03T22:48 (+8) in response to Stewardship: CEA's 2025-26 strategy to reach and raise EA's ceiling
It's great that CEA will be prioritizing growing the EA community. IMO this is a long time coming.
Here are some of the things I'll be looking for which would give me more confidence that this emphasis on growth will go well:
Sarah Cheng @ 2025-04-04T15:30 (+4)
To quickly add on to what Toby wrote: the CEA Online Team has also been redesigning effectivealtruism.org and we expect to soft launch it soon. I post quick takes when we update our half-quarterly plans, so you can follow along there. :)
MichaelDickens @ 2025-04-04T15:03 (+2) in response to The AI Adoption Gap: Preparing the US Government for Advanced AI
Thank you for this article. I've read some of the stuff you wrote in your capacity at CEA, which I quite enjoyed; your comments on slow vs. quick mistakes changed my thinking. This is the first thing I've read since you started at Forethought. I have some comments, which are mostly critical. I tried using ChatGPT and Claude to make my comment more even-handed, but they did a bad job, so you're stuck with reading my overly critical writing. Some of my criticism may be misguided because I don't have a good understanding of the motivation behind writing the article, so it might help if you explained more about that motivation. Of course you're not obligated to explain anything to me or to respond at all; I'm just writing this because I think it's generally useful to share criticisms.
I think this article would benefit from a more thorough discussion of the downside risks of its proposed changes. Off the top of my head:
The article does mention some downsides, but with no discussion of tradeoffs, and it says we should focus on "win-wins" but doesn't actually say how we can avoid the downsides (or, if it did, I didn't get that out of the article).
To me the article reads like you decided the conclusion and then wrote a series of justifications. It is not clear to me how you arrived at the belief that the government needs to start using AI more, and it's not clear to me whether that's true.
For what it's worth, I don't think government competence is what's holding us back from having good AI regulations, it's government willingness. I don't see how integrating AI into government workflow will improve AI safety regulations (which is ultimately the point, right?[^1]), and my guess is on balance it would make AI regulations less likely to happen because policy-makers will become more attached to their AI systems and won't want to restrict them.
I also found it odd that the report did not talk about extinction risk. In its list of potential catastrophic outcomes, the final item on the list was "Human disempowerment by advanced AI", which IMO is an overly euphemistic way of saying "AI will kill everyone".
By my reading, this article is meant to be the sort of Very Serious Report That Serious People Take Seriously, which is why it avoids talking about x-risk. I think that:
There are some recommendations in this article that I like, and I think it should focus much more on them:
I also liked the section "Government adoption of AI will need to manage important risks" and I think it should have been emphasized more instead of buried in the middle.
Some line item responses
I don't really know how to organize this so I'm just going to write a list of lines that stood out to me.
What does that mean exactly? I can't think of how you could do that without shortening timelines so I don't know what you have in mind here.
I also don't understand this. Procurement by whom, for what purpose? And again, how does this not shorten timelines? (Broadly speaking, more widespread use of AI shortens timelines at least a little bit by increasing demand.)
This sounds plausible but I am not convinced that it's true, and the article presents no evidence, only speculation. I would like to see more rigorous arguments for and against this position instead of taking it for granted.
This line seems confused. Why would a conspicuous failure make government agencies want to suddenly start using the AI system that just conspicuously failed? Seems like this line is more talking about regulating AI than adopting AI, whereas the rest of the article is talking about adopting AI.
I don't think that's how that works. Government gets to make laws. Frontier AI companies don't get to make laws. This is only true if you're talking about an AI company that controls an AI so powerful that it can overthrow the government, and if that's what you're talking about then I believe that would require thinking about things in a very different way than how this article presents them.
And: would adopting AI (i.e. paying frontier companies so government employees can use their products) reduce the concentration of power? Wouldn't it do the opposite?
Up to this point, the article was primarily talking about how we should speed up government AI adoption. But now it's saying that's not a good framing? So why did the article use that framing? I get the sense that you didn't intend to use that framing, but it comes across as if you're using it.
I would like to see more justification for why this is a good idea. The obvious upside is that people who better understand AI can write more useful regulations. On the other hand, empirically, it seems that people with more technical expertise (like ML engineers) are on average less in favor of regulations and more in favor of accelerating AI development (shortening timelines, although they usually don't think "timelines" are a thing). So arguably we should have fewer such people in positions of government power. I can see the argument either way, I'm not saying you're wrong, I'm just saying you can't take your position as a given.
And like I said before, I think by far the bigger bottleneck to useful AI regulations is willingness, not expertise.
(this isn't a disagreement, just a comment:)
You don't say anything about how to do that but it seems to me the obvious answer is antitrust law.
(this is a disagreement:)
The linked article attached to this quote says "Itâs very unclear whether centralizing would be good or bad", but you're citing it as if it definitively finds centralization to be bad.
What does AI adoption have to do with the ability to respond to existential challenges? It seems to me that once AI is powerful enough to pose an existential threat, then it doesn't really matter whether the US government is using AI internally.
I don't think any mapping is necessary. Right now AI safety regulation is ineffective in every scenario, because there are no AI safety regulations (by safety I mean notkilleveryoneism). Trivially, regulations that don't exist are ineffective. Which is one reason why IMO the emphasis of this article somewhat misses the mark: right now the priority should be to get any sort of safety regulations at all.
I am moderately bullish on this idea (I've spoken favorably about Sentinel before), although I don't actually have a good sense of when it would be useful. I'd like to see more analysis of exactly what sorts of scenarios "emergency capacity" would be able to prevent catastrophes in. Not that that's within the scope of this article, I just wanted to mention it.
[^1]: Making government more effective in general doesn't seem to me to qualify as an EA cause area, although perhaps a case could be made. The thing that matters on EA grounds (with respect to AI) is making the government specifically more effective at, or more inclined to, regulate the development of powerful AI.
Swan @ 2025-04-04T14:51 (+11) in response to Swan's Quick takes
Is anyone working on an updated version of the biosecurity map? I helped make biosecurity.world and would be happy to help/mentor someone interested in doing this. Please comment or DM me.
lauren_mee @ 2025-04-04T11:45 (+16) in response to What I learned from a week in the EU policy bubble
Thanks Vasco, I really appreciate the thoughtful engagement. I think there are a few different things getting a bit mixed together here, so I'd love to tease them apart and explain where I still see things differently.
You mentioned that the key is the difference in impact, not concern about animals. But I'd argue that this concern does in fact translate to impact, especially when we're thinking in terms of counterfactuals and replaceability. For example, if someone applies for a role at SWP, their counterfactual impact is likely just the difference between them and the next-best candidate, who is almost certainly also deeply concerned about shrimp welfare. But in an EC role, the counterfactual is likely that the position goes to someone who wouldn't raise animal issues at all. So the marginal impact is potentially much greater, even in junior positions.
We've already seen specific examples, particularly in the UK, where junior staff inside government have been able to push for progress on animal welfare that would never have happened through lobbying alone. These aren't abstract hypotheticals. Another specific example I found out about yesterday: someone was able to pass something through their local government that led to 400 million animals being spared, something that wasn't even on the radar before they entered. It seems extremely unlikely that this kind of leverage and counterfactual would hold for the best vs. next-best candidate in an NGO.
2. Hierarchy matters, but so does initiative, positioning, and timing.
Yes, the Commission is large and hierarchical. But so is almost every institution with leverage over major policy. What we've seen is that once someone is in, they can navigate toward departments and roles where they're better positioned to influence change. That's part of what this program is about: helping people enter the system with the long game in mind.
It's not a passive process; it requires individuals to actively find their leverage points and pockets of influence. A lot depends on the individual's initiative and ability to spot opportunities, but that's true in any sector, whether in NGOs or in policy. I would say, though, that if that doesn't appeal, it's a sign working in the civil service is not a good fit.
You noted that lobbyists can reach many policymakers, which is true. But that doesn't mean they're more impactful than internal actors; it's highly dependent on context. And critically, lobbyists themselves will tell you (and did on our programme) that what they need most are credible insiders who understand the system, have networks, and can champion ideas from within.
3. External lobbying vs. insider influence is a false binary.
We often hear people argue for becoming a lobbyist instead of going into the system. But I think this skips a vital step: the most effective lobbyists often were insiders first. Without that institutional knowledge, they lack the credibility and relational capital that drive real traction on issues that aren't already politically salient, like shrimp welfare.
So to me, the idea that someone without any government experience should just jump into policy advocacy seems less plausible than a pathway that starts inside the system, builds knowledge, and later leverages that from a lobbying or NGO position, if that's where personal fit leads.
So overall, I'd say the value of this programme comes not from comparing against some hypothetical "random" NGO role, but from offering people a realistic path into a system that's historically been quite closed off to animal advocates, and an opportunity to build essential career capital to be a more effective advocate in the future.
Vasco Grilo @ 2025-04-04T14:40 (+2)
Thanks for the great clarifications, Lauren! Strongly upvoted.
Interesting example! I would be interested to know more, but I understand it may be sensitive information to share publicly. I think one can help 400 M shrimp by donating 26.7 k$ (= 400*10^6/(15*10^3)) to the Shrimp Welfare Project (SWP). So, if your example was representative of the impact of a career in policy inside the system, and the impact per animal helped in your example matched that of SWP (which I estimated to be 0.0426 DALYs averted), maximising donations could still be better. For a career of 40 years, one would only need to donate 668 $ (= 26.7*10^3/40) more to SWP per year relative to the career in policy inside the system.
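The arithmetic above can be checked in a few lines. The ~15,000 shrimp helped per dollar is the comment's own assumed SWP cost-effectiveness, not an independently verified figure:

```python
# Back-of-the-envelope check of the donation-equivalence calculation.
shrimp_helped = 400e6       # animals spared in the policy example
shrimp_per_dollar = 15e3    # assumed SWP cost-effectiveness (from the comment)
career_years = 40

equivalent_donation = shrimp_helped / shrimp_per_dollar  # total $ to match the example
extra_per_year = equivalent_donation / career_years      # spread over a 40-year career

print(f"${equivalent_donation:,.0f} total, ${extra_per_year:,.0f} per year")
# prints: $26,667 total, $667 per year
```

(The comment's 668 $/year comes from rounding to 26.7 k$ before dividing; the unrounded figure is ~667 $/year.)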
Jeroen De Ryck @ 2025-04-04T04:20 (+3) in response to Jeroen De Ryck's Quick takes
I'm glad to see that the EA Forum Team implemented clear and obviously noticeable tags for April Fools' Day posts. It shows they listen to feedback!
Toby Tremlett @ 2025-04-04T14:34 (+2)
Thanks for giving feedback! I looked at this particular quick take again before April Fool's to make sure we'd fixed the issue. Thanks to @JP Addison for writing the code to make the tags visible.
Vasco Grilo @ 2025-04-04T13:49 (+2) in response to Cost-effectiveness of Veganuary and School Plates
Thanks for the comment, Sjlver!
My cost-effectiveness estimate is supposed to be unbiased in the sense of not being too low or high in expectation.
To be clear, I think one single email or video can turn someone from omnivore to vegan. However, I believe that is super far from the expected effect.
The supply per capita of poultry meat in Germany has not had a clear downwards trend, although it does seem like it has already peaked.
Likewise for the supply per capita of fish and other seafood in Germany.
However, this is very weak evidence about the impact of Veganuary. There are many factors besides Veganuary which affect meat consumption in Germany, and Germany may well be the country with the most positive trends among those Veganuary targets. In the UK, the consumption per capita of poultry meat has been increasing, although that of fish and other seafood has recently been decreasing.
Nitpick. Dairy accounts for a very small fraction of animal suffering. I think decreases in its consumption only matter to the extent they predict decreases in the consumption of eggs, poultry birds, fish, or other seafood.
Sjlver @ 2025-04-04T14:29 (+3)
Thanks for the response!
I understand that you are worried about chicken and fish consumption. I have no knowledge about why these charts are the way they are, or why people in the UK consume twice as much chicken as those in Germany. It's also difficult to guess the impact of Veganuary on these trends. In that sense, I find the charts a bit distracting.
What I intended to say with my comment is that Veganuary has clearly visible impacts around me: when I go shopping, when I see ads, when I eat out. This seems to correlate with a general trend of seeing more vegan products, brands, and menu choices. Maybe the general trend that I identified is similarly distracting as your chicken and fish charts... yet it does seem to be something that Veganuary directly works on and influences.
I suspect that you brought up the chicken and fish charts because you worry about shifts in consumption from larger animals to higher numbers of small animals. This is a real possibility, but I would be wary of accusing Veganuary of causing such a shift without good evidence. I grant that Veganuary tries to appeal to a broad range of people with various reasons for reducing meat consumption, including climate reasons, which might cause a shift away from ruminants. But I recall there was a lot of Veganuary content around animal welfare. Personally, Veganuary shifted my views to care more about animals.
Animal welfare seems to be the main participant motivation. Here's a figure from the 2023 survey report:
Taking a step back, it's a little sad that this article feels so hostile towards Veganuary, and shows Veganuary in a bad light primarily because of discounts and back-of-the-envelope numbers that seem quite arbitrary. I see a lot less competition than you do between Veganuary and work on shrimp welfare or cage-free campaigns. On the contrary, people who have participated in Veganuary are likely more receptive for that type of work, and this is a benefit that we won't find in CEAs ;-)
Sjlver @ 2025-04-04T09:30 (+5) in response to Cost-effectiveness of Veganuary and School Plates
It's great to try and analyze the cost-effectiveness of Veganuary. I'm thankful for this post and also for the responses by @Toni Vernelli and others.
While I appreciate the effort, I find it hard to agree with Vasco's conclusions. There are many discounts in the analysis that feel pretty arbitrary to me. Toni has answered to this much better than I could. I'd just like to share a few personal impressions. These are of course biased, but might explain why I'm suspicious about the many downward adjustments (and lack of upward adjustments) in Vasco's analysis:
Overall, there seems to be a clear trend in Germany toward more vegan products. Oat milk shelves are larger than cow milk shelves in many retailers nowadays; there are many meat alternatives; vegan products are becoming popular also in other areas such as chocolate and baked goods. It's difficult to isolate the effect that Veganuary has played in all this... but I'd be surprised if it was as small as Vasco estimates.
MathiasKB @ 2025-04-04T10:23 (+10) in response to Launching Screwworm-Free Future — Funding and Support Request
No idea, it's probably worth reaching out to ask them and alert them in case they aren't already mindful of it! I personally am not the least bit interested in this concern, so I will not take any action to address it.
I am not saying this to be a dick (I hope), but because I don't want to give you a mistaken impression that we are currently making any effort to address this consideration at Screwworm Free Future.
I think people are far too happy to give an answer like: "Thanks for highlighting this concern, we are very mindful of this throughout our work" which while nice-sounding is ultimately dishonest and designed to avoid criticism. EA needs more honesty and you deserve to know my actual stance.
I don't mind at all someone looking into this and I am happy to change my mind if presented with evidence, but my prior for this changing my mind is so low that I don't currently consider it worthwhile to spend time investigating or even encouraging others to investigate.
Vasco Grilo @ 2025-04-04T13:24 (+2)
Thanks for the comment, Mathias! I strongly upvoted it. I love the transparency. I emailed Mal Graham, WAI's strategy director, right after my comment.
Davidmanheim @ 2025-04-04T04:36 (+2) in response to Share AI Safety Ideas: Both Crazy and Not. â2
Yeah, you should talk to someone who knows more about security than myself, but here are a couple of starting points:
This is not a thing, and likely cannot be a thing. You can't prove an AI system isn't malign, and work that sounds like it says this is actually doing something very different.
You can't know that a given matrix multiplication won't be for an AI system. It's the same operation, so if you can buy or rent GPU time, how would it know what you are doing?
ank @ 2025-04-04T13:15 (+1)
Thank you for your interest, David! Math-proven safe AIs are possible; our group has just achieved this (our researcher writes under a pseudonym for safety reasons, please ignore that): https://x.com/MelonUsks/status/1907929710027567542
Why is it math-proven safe? Because it's fully static: an LLM by itself is a giant static geometric shape in a file; only GPUs make it non-static, agentic. It's called place AI, a type of tool AI.
To address your second question: there is a way to know whether a given matrix multiplication is for AI or not. In the cloud we'll have a math-proven safe AI model inside each math-proven safe GPU. GPU hardware will be remade to be an isolated unit that just spits out output: images, text, etc. Each GPU is an isolated, math-proven safe computer; its sole purpose is safety and hardware+firmware isolation of the AI model from the outside world.
But the main priority is putting all the GPUs in international, scientist-controlled clouds; they'll figure out the small details that are left to resolve. Almost all current GPUs (especially consumer ones) are 100% unprotected from the imminent AI agent botnet (think a computer virus, but much worse), and we can't switch off the whole Internet.
Please, refer to the link above for further information. Thank you for this conversation!
AGB @ 2025-03-30T11:51 (+132) in response to Third-wave AI safety needs sociopolitical thinking
Just to respond to a narrow point because I think this is worth correcting as it arises: Most of the US/EU GDP growth gap you highlight is just population growth. In 2000 to 2022 the US population grew ~20%, vs. ~5% for the EU. That almost exactly explains the 55% vs. 35% growth gap in that time period on your graph; 1.55 / 1.2 * 1.05 = 1.36.
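A minimal sketch of that decomposition, using the approximate growth figures quoted above:

```python
# If US and EU per-capita GDP growth were identical over 2000-2022,
# the US's extra population growth alone would predict the EU's GDP growth.
us_gdp_growth = 1.55   # ~55% US GDP growth (figure from the comment)
us_pop_growth = 1.20   # ~20% US population growth
eu_pop_growth = 1.05   # ~5% EU population growth

us_per_capita_growth = us_gdp_growth / us_pop_growth      # implied per-capita growth
implied_eu_gdp_growth = us_per_capita_growth * eu_pop_growth

print(f"{implied_eu_gdp_growth:.2f}")  # prints: 1.36
```

This lands almost exactly on the ~35% EU growth observed, which is the comment's point: population, not productivity, explains most of the gap.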
This shouldn't be surprising, because productivity in the 'big 3' of US / France / Germany has tracked very closely for quite some time. (Edit: I wasn't expecting this comment to blow up, and it seems I may have rushed this point. See Erich's comment below and my response.) The source below shows a slight increase in the gap, but of <5% over 20 years. If you look further down, the Economist piece I link has the opposing conclusion, but again by very thin margins. Mostly I think the right conclusion is that the productivity gap has barely changed relative to demographic factors.
I'm not really sure where the meme that there's some big / growing productivity difference due to regulation comes from, but I've never seen supporting data. To the extent culture or regulation is affecting that growth gap, it's almost entirely going to be from things that affect total working hours, e.g. restrictions on migration, paid leave, and lower birth rates[1], not from things like how easy it is to found a startup.
https://www.economist.com/graphic-detail/2023/10/04/productivity-has-grown-faster-in-western-europe-than-in-america
Fertility rates are actually pretty similar now, but the US had much higher fertility than Germany especially around 1980 - 2010, converging more recently, so it'll take a while for that to impact the relative sizes of the working populations.
Erich_Grunewald 🔸 @ 2025-04-04T12:18 (+13)
This is weird because other sources do point towards a productivity gap. For example, this report concludes that "European productivity has experienced a marked deceleration since the 1970s, with the productivity gap between the Euro area and the United States widening significantly since 1995, a trend further intensified by the COVID-19 pandemic".
Specifically, it looks as if, since 1995, the GDP per capita gap between the US and the eurozone has remained very similar, but this is due to a widening productivity gap being cancelled out by a shrinking employment rate gap:
This report from Banque de France has it that "the EU-US gap has narrowed in terms of hours worked per capita but has widened in terms of GDP per hours worked", and that in France at least this can be attributed to "producers and heavy users of IT technologies":
The Draghi report says 72% of the EU-US GDP per capita gap is due to productivity, and only 28% is due to labour hours:
Part of the discrepancy may be that the OWID data only goes until 2019, whereas some of these other sources report that the gap has widened significantly since COVID? But that doesn't seem to be the case in the first plot above (it still shows a widening gap before COVID).
Or maybe most of the difference is due to comparing the US to France/Germany, versus also including countries like Greece and Italy that have seen much slower productivity growth. But that doesn't explain the France data above (it still shows a gap between France and the US, even before COVID).
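The offsetting effect described above can be illustrated with the accounting identity GDP per capita = (GDP per hour) × (hours worked per capita). The ratios below are invented purely for illustration; they are not taken from any of the cited reports:

```python
# Hypothetical illustration: the US/eurozone productivity gap widens while
# the hours-worked gap shrinks, leaving the overall GDP-per-capita gap
# nearly unchanged. All ratios here are made up.

def per_capita_gap(productivity_ratio: float, hours_ratio: float) -> float:
    # GDP/capita = (GDP/hour) * (hours/capita), so US/EU ratios multiply.
    return productivity_ratio * hours_ratio

gap_1995 = per_capita_gap(productivity_ratio=1.10, hours_ratio=1.15)
gap_2019 = per_capita_gap(productivity_ratio=1.25, hours_ratio=1.01)

# The two components moved in opposite directions, so the product barely moved.
assert abs(gap_1995 - gap_2019) < 0.01
```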
Vasco Grilo🔸 @ 2025-04-04T07:50 (+2) in response to What I learned from a week in the EU policy bubble
Thanks, David.
I understand this. However, the key is the difference in impact, not in concern about animals. I agree people completing the program care much more about animals than a random person in a junior position in the EU's institutions, but my impression is that there is limited room for the greater care to translate into helping animals in junior positions. The Commission has ~32,000 people, whereas the largest organisation recommended by ACE, The Humane League (THL), has 136, so hierarchy matters much more in the former.
Makes sense. On the other hand, a lobbyist can interact with more policymakers than an APA. I do not know whether a lobbyist is more or less impactful than an APA. I think it depends on the specifics.
lauren_mee @ 2025-04-04T11:45 (+16)
Thanks, Vasco - I really appreciate the thoughtful engagement. I think there are a few different things getting a bit mixed together here, so I'd love to tease them apart and explain where I still see things differently.
You mentioned that the key is the difference in impact, not concern about animals. But I'd argue that this concern does in fact translate to impact, especially when we're thinking in terms of counterfactuals and replaceability. For example, if someone applies for a role at SWP, their counterfactual impact is likely just the difference between them and the next-best candidate, who is almost certainly also deeply concerned about shrimp welfare. But in an EC role, the counterfactual is likely that the position goes to someone who wouldn't raise animal issues at all. So the marginal impact is potentially much greater, even in junior positions.
We've already seen specific examples, particularly in the UK, where junior staff inside government have been able to push for progress on animal welfare that would never have happened through lobbying alone. These aren't abstract hypotheticals. Another specific example I found out about yesterday: someone was able to pass something through their local government that led to 400 million animals being spared, something that wasn't even on the radar before they entered. It seems extremely unlikely that this kind of leverage and counterfactual would apply to the best vs. next-best candidate in an NGO.
2. Hierarchy matters, but so does initiative, positioning, and timing.
Yes, the Commission is large and hierarchical. But so is almost every institution with leverage over major policy. What we've seen is that once someone is in, they can navigate toward departments and roles where they're better positioned to influence change. That's part of what this program is about: helping people enter the system with the long game in mind.
It's not a passive process: it requires individuals to actively find their leverage points and pockets of influence. A lot depends on the individual's initiative and ability to spot opportunities, but that's true in any sector, whether in NGOs or in policy. I would say, though, that if that doesn't appeal, it's a sign working in the civil service is not a good fit.
You noted that lobbyists can reach many policymakers, which is true. But that doesn't mean they're more impactful than internal actors; it's highly dependent on context. And critically, lobbyists themselves will tell you (and did on our programme) that what they need most are credible insiders who understand the system, have networks, and can champion ideas from within.
3. External lobbying vs. insider influence is a false binary.
We often hear people argue for becoming a lobbyist instead of going into the system. But I think this skips a vital step: the most effective lobbyists often were insiders first. Without that institutional knowledge, they lack the credibility and relational capital that drives real traction on issues that aren't already politically salient, like shrimp welfare.
So to me, the idea that someone without any government experience should just jump into policy advocacy seems less plausible than a pathway that starts inside the system, builds knowledge, and later leverages that from a lobbying or NGO position if that's where personal fit leads.
So overall, I'd say the value of this programme comes not from comparing against some hypothetical "random" NGO role, but from offering people a realistic path into a system that's historically been quite closed off to animal advocates, and an opportunity to build essential career capital to be a more effective advocate in the future.
Toby Tremlett🔹 @ 2025-04-04T11:07 (+16) in response to Toby Tremlett's Quick takes
You guys overused the button... so we're putting Bulby on bed rest for a bit.

Look at the poor guy:
Ronen Bar @ 2025-04-04T06:43 (+1) in response to AI Moral Alignment: The Most Important Goal of Our Generation
That is a great idea, thanks for all your remarks. I would be happy to hear more about your vision for this, will DM you, hope it is OK.
Beyond Singularity @ 2025-04-04T10:53 (+1)
Thanks for the comment, Ronen! Appreciate the feedback.
GideonF @ 2025-04-04T10:52 (+2) in response to Gideon Futerman's Quick takes
Are the annoying happy lightbulbs when you upvote something here to stay, or are they just an April Fools' thing that hasn't been removed yet?
Vasco Grilo🔸 @ 2025-04-04T08:22 (+2) in response to Launching Screwworm-Free Future — Funding and Support Request
Great to know! Do you know whether they will cover effects on screwworms, which I worry may make their eradication harmful? I think it is fine to pursue interventions which may be harmful to wild animals nearterm, but then it is important to learn from them to minimise harmful effects in the future.
MathiasKB🔸 @ 2025-04-04T10:23 (+10)
No idea, it's probably worth reaching out to ask them and alert them in case they aren't already mindful of it! I personally am not the least bit interested in this concern, so I will not take any action to address it.
I am not saying this to be a dick (I hope), but because I don't want to give you a mistaken impression that we are currently making any effort to address this consideration at Screwworm Free Future.
I think people are far too happy to give an answer like: "Thanks for highlighting this concern, we are very mindful of this throughout our work" which while nice-sounding is ultimately dishonest and designed to avoid criticism. EA needs more honesty and you deserve to know my actual stance.
I don't mind at all someone looking into this and I am happy to change my mind if presented with evidence, but my prior for this changing my mind is so low that I don't currently consider it worthwhile to spend time investigating or even encouraging others to investigate.
NickLaing @ 2025-04-04T09:53 (+4) in response to Three Organizers Walk Into a Liberal Arts College
This is super encouraging - I'm impressed how you leaned into the areas where liberal arts students might already have a felt need and interest, both empathetic and smart.
1) Finding a meaningful job (apparently a big deal for Gen Z)
2) Diverse food options including vegetarian meals
No suggestions here unfortunately, at 38 I'm not sure what the youth are into ;).
Ozzie Gooen @ 2025-03-30T22:22 (+51) in response to Ozzie Gooen's Quick takes
Reflections on "Status Handcuffs" over one's career
(This was edited using Claude)
Having too much professional success early on can ironically restrict you later on. People typically are hesitant to go down in status when choosing their next job. This can easily mean that "staying in career limbo" can be higher-status than actually working. At least when you're in career limbo, you have a potential excuse.
This makes it difficult to change careers. It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to learn a new domain, for instance.
The EA Community Context
In the EA community, some aspects of this are tricky. The funders very much want to attract new and exciting talent. But this means that the older talent is in an awkward position.
The most successful get to take advantage of the influx of talent, with more senior leadership positions. But there aren't too many of these positions to go around. It can feel weird to work on the same level or under someone more junior than yourself.
Pragmatically, I think many of the old folks around EA are either doing very well, or are kind of lost/exploring other avenues. Other areas allow people to have more reputable positions, but these are typically not very EA/effective areas. Often E2G isn't very high-status in these clusters, so I think a lot of these people just stop doing much effective work.
Similar Patterns in Other Fields
This reminds me of law firms, which are known to have "up or out" cultures. I imagine some of this acts as a formal way to prevent this status challenge - people who don't highly succeed get fully kicked out, in part because they might get bitter if their career gets curtailed. An increasingly narrow set of lawyers continue on the Partner track.
I'm also used to hearing about power struggles for senior managers close to retirement at big companies, where there's a similar struggle. There's a large cluster of highly experienced people who have stopped being strong enough to stay at the highest levels of management. Typically these people stay too long, then completely leave. There can be few paths to gracefully go down a level or two while saving face and continuing to provide some amount of valuable work.
But around EA and a lot of tech, I think this pattern can happen much sooner - like when people are in the age range of 22 to 35. It's more subtle, but it still happens.
Finding Solutions
I'm very curious if it's feasible for some people to find solutions to this. One extreme would be, "Person X was incredibly successful 10 years ago. But that success has faded, and now the only useful thing they could do is office cleaning work. So now they do office cleaning work. And we've all found a way to make peace with this."
Traditionally, in Western culture, such an outcome would be seen as highly shameful. But in theory, being able to find peace and satisfaction from something often seen as shameful for (what I think of as overall-unfortunate) reasons could be considered a highly respectable thing to do.
Perhaps there could be a world where [valuable but low-status] activities are identified, discussed, and later made high-status.
The EA Ideal vs. Reality
Back to EA. In theory, EAs are people who try to maximize their expected impact. In practice, EA is a specific ideology that typically has a limited influence on people (at least compared to strong religious groups, for instance). I think that the EA scene has demonstrated success at getting people to adjust careers (in circumstances where it's fairly cheap and/or favorable to do so), and has created an ecosystem that rewards people for certain EA behaviors. But at the same time, people typically come with a great deal of non-EA constraints that must be continually satisfied for them to be productive: money, family, stability, health, status, etc.
Personal Reflection
Personally, every few months I really wonder what might make sense for me. I'd love to be the kind of person who would be psychologically okay doing the lowest-status work for the youngest or lowest-status people. At the same time, knowing myself, I'm nervous that taking a very low-status position might cause some of my mind to feel resentment and burnout. I'll continue to reflect on this.
SiobhanBall @ 2025-04-04T09:52 (+1)
I agree with you. I think in EA this is especially the case because much of the community-building work is focused on universities/students, and because of the titling issue someone else mentioned. I don't think someone fresh out of uni should be head of anything, wah. But the EA movement is young and was started by young people, so it'll take a while for career-long progression funnels to develop organically.
Sjlver @ 2025-04-04T09:30 (+5) in response to Cost-effectiveness of Veganuary and School Plates
It's great to try and analyze the cost-effectiveness of Veganuary. I'm thankful for this post and also for the responses by @Toni Vernelli and others.
While I appreciate the effort, I find it hard to agree with Vasco's conclusions. There are many discounts in the analysis that feel pretty arbitrary to me. Toni has responded to this much better than I could. I'd just like to share a few personal impressions. These are of course biased, but might explain why I'm suspicious about the many downward adjustments (and lack of upward adjustments) in Vasco's analysis:
Overall, there seems to be a clear trend in Germany toward more vegan products. Oat milk shelves are larger than cow milk shelves in many retailers nowadays; there are many meat alternatives; vegan products are becoming popular also in other areas such as chocolate and baked goods. It's difficult to isolate the effect that Veganuary has played in all this... but I'd be surprised if it was as small as Vasco estimates.
Greg_Colbourn ⏸️ @ 2025-04-04T09:27 (0) in response to Anthropic is not being consistently candid about their connection to EA
Re Anthropic and (unpopular) parallels to FTX, just thinking that it's pretty remarkable that no one has brought up the fact that SBF, Caroline Ellison and FTX were major funders of Anthropic. Arguably Anthropic wouldn't be where they are today without their help! It's unfortunate the journalist didn't press them on this.
AnonymousEAForumAccount @ 2025-04-03T22:48 (+8) in response to Stewardship: CEA's 2025-26 strategy to reach and raise EA's ceiling
It's great that CEA will be prioritizing growing the EA community. IMO this is a long time coming.
Here are some of the things I'll be looking for which would give me more confidence that this emphasis on growth will go well:
Toby Tremlett🔹 @ 2025-04-04T08:37 (+9)
Hey! I'm the current staff-member working on the EA Newsletter - and I'm currently working on the EA Newsletter improvement project we didn't have time for before. So far this has been:
The next step is more seriously thinking about marketing, considering advertising it, integrating it more with other CEA touchpoints etc... Stay tuned.
Also, I always welcome any suggestions for low-hanging fruit in Newsletter marketing (I'm sure there is a lot of this), as well as general feedback on the Newsletter itself.
cb @ 2025-04-04T06:27 (+2) in response to The Bottleneck in AI Policy Isn't Ethics—It's Implementation
Seems false, unless he's using "general agreement" and "foreseeable" in some very narrow sense?
Tristan D @ 2025-04-04T08:16 (+1)
I was also interested to follow this up. For the source of this claim he cites another article he has written 'Is it time for robot rights? Moral status in artificial entities' (https://link.springer.com/content/pdf/10.1007/s10676-021-09596-w.pdf).
titotal @ 2025-04-04T08:08 (+5) in response to "Long" timelines to advanced AI have gotten crazy short
I feel like this should be caveated with a "long timelines have gotten short... within people the author knows about in tech circles".
I mean, just two months ago someone asked a room full of cutting edge computational physicists whether their job could be replaced by an AI soon, and the response was audible laughter and a reply of "not in our lifetimes".
On one side you could say that this discrepancy is because the computational physicists aren't as familiar with state of the art genAI, but on the flipside, you could point out that tech circles aren't familiar with state of the art physics, and are seriously underestimating the scale of task ahead of them.
AGB 🔸 @ 2025-03-30T11:51 (+132) in response to Third-wave AI safety needs sociopolitical thinking
David Mathers🔸 @ 2025-04-04T08:07 (+3)
If productivity is so similar, how come the US is quite a bit richer per capita? Is that solely accounted for by workers working longer hours?
Ozzie Gooen @ 2025-03-30T22:22 (+51) in response to Ozzie Gooen's Quick takes
ASuchy @ 2025-04-04T07:56 (+3)
Thanks for writing this, this is also something I have been thinking about and you've expressed it more eloquently.
One thing I have thought might be useful is at times showing restraint with job titling. I've observed cases where people have had a title of, for example, Director in a small or growing org, where in a larger org the same role might be titled coordinator, lead, or admin.
I've thought at times this doesn't necessarily set people up for long-term career success, as the logical next career step in terms of skills and growth, or a career shift, is often associated with a lower-sounding title. Which I think decreases motivation to take on these roles.
At the same time I have seen people, including myself, take a decrease in salary and title, in order to shift careers and move forward.
David T @ 2025-04-03T18:08 (+9) in response to What I learned from a week in the EU policy bubble
The obvious difference is that an alternative candidate for a junior position in a shrimp welfare organization is likely to be equally concerned about shrimp welfare. An alternative candidate for a junior person in an MEP's office or DG Mare is not, hence the difference at the margin is (if non-zero) likely much greater. And a junior person progressing in their career may end up with direct policy responsibility for their areas of interest, whereas a person who remains a lobbyist will never have this. It even seems non-obvious that even a senior lobbyist will have more impact on policymakers than their more junior adviser or research assistant, though as you say it does depend on whether the junior adviser has the freedom to highlight issues of concern.
Karen Singleton @ 2025-04-04T02:48 (+2) in response to How should we adapt animal advocacy to near-term AGI?
This post inspired me to complete the BlueDot Future of AI course! Thanks Max!
Sharing in case this is useful for others - online, 2hr course: https://course.bluedot.org/future-of-ai
Max Taylor @ 2025-04-04T07:17 (+1)
That's great to hear! BlueDot has been my main resource for getting to grips with AI. Please feel free to share any ideas that come up as you explore how this applies to your own advocacy :-)
Beyond Singularity @ 2025-04-02T16:33 (+2) in response to AI Moral Alignment: The Most Important Goal of Our Generation
This is a critically important and well-articulated post, thank you for defining and championing the Moral Alignment (MA) space. I strongly agree with the core arguments regarding its neglect compared to technical safety, the troubling paradox of purely human-centric alignment given our history, and the urgent need for a sentient-centric approach.
You rightly highlight Sam Altman's question: "to whose values do you align the system?" This underscores that solving MA isn't just a task for AI labs or experts, but requires much broader societal reflection and deliberation. If we aim to align AI with our best values, not just a reflection of our flawed past actions, we first need robust mechanisms to clarify and articulate those values collectively.
Building on your call for action, perhaps a vital complementary approach could be fostering this deliberation through a widespread network of accessible "Ethical-Moral Clubs" (or perhaps "Sentientist Ethics Hubs" to align even closer with your theme?) across diverse communities globally.
These clubs could serve a crucial dual purpose:
Such a grassroots network wouldn't replace the top-down efforts and research you advocate for, but could significantly support and strengthen the MA movement you envision. It could cultivate the informed public understanding, deliberation, and engagement necessary for sentient-centric AI to gain legitimacy and be implemented effectively and safely.
Ultimately, fostering collective ethical literacy and structured deliberation seems like a necessary foundation for ensuring AI aligns with the best of our values, benefiting all sentient beings. Thanks again for pushing this vital conversation forward.
VeryJerry @ 2025-04-03T22:53 (+1) in response to AI Moral Alignment: The Most Important Goal of Our Generation
I was just thinking about writing a post like this after listening to https://www.astralcodexten.com/p/introducing-ai-2027 (especially the end, where they talk about getting into blogging), and thinking about the massive blind spot Rationalists seem to have for sentientism. I'm particularly interested in ways to get involved and help push this cause forward, especially as someone who, frankly, feels pretty helpless given the massive scale of non-human suffering, the massive amount of human apathy towards it, and the many flaws in the current animal rights movement.
Ronen Bar @ 2025-04-04T06:38 (+1)
I think creating content on these topics is very valuable, and I am happy to brainstorm other options. I will also do a post on possible interventions.
ank @ 2025-04-02T13:52 (+1) in response to Share AI Safety Ideas: Both Crazy and Not. â2
Yes, the only realistic and planet-wide 100% safe solution is this: putting all the GPUs in safe cloud/s controlled by international scientists that only make math-proven safe AIs and only stream output to users.
Each user can use his GPU for free from the cloud on any device (even on phone), when the user doesn't use it, he can choose to earn money by letting others use his GPU.
You can do everything you do now, even buy or rent GPUs, all of them just will be cloud math-proven safe GPUs instead of physical. Because GPUs are nukes are we want no nukes or to put them deep underground in one place so they can be controlled by international scientists.
We still haven't 100% solved computer viruses (my mom had an Android virus recently); even the iPhone and Nintendo Switch got jailbroken almost instantly, and there are companies that jailbreak iPhones as a service. I think Google Docs never got jailbroken or majorly hacked; it's a cloud service, so we need to base our AI and GPU security on this best example: we need to have all our GPUs in a cloud controlled by international scientists.
Otherwise any hacker will write a virus (just to steal money) with an AI agent component that grabs consumer GPUs like cupcakes. The AI agent can even become autonomous (and a recent paper showed that they become evil in major ways if given an evil goal - wanting to have a tea party with Stalin and Hitler). Will anyone align AIs for hackers, or will hackers themselves do it perfectly (they won't), so that the AI agent just steals money but remains a slave and does nothing else bad?
Davidmanheim @ 2025-04-04T04:36 (+2)
Yeah, you should talk to someone who knows more about security than myself, but as a couple starting points;
This is not a thing, and likely cannot be a thing. You can't prove an AI system isn't malign, and work that sounds like it says this is actually doing something very different.
You can't know that a given matrix multiplication won't be for an AI system. It's the same operation, so if you can buy or rent GPU time, how would it know what you are doing?
Jeroen De Ryck 🔹 @ 2025-04-04T04:20 (+3) in response to Jeroen De Ryck's Quick takes
I'm glad to see that the EA Forum Team implemented clear and obviously noticeable tags for April Fools' Day posts. It shows they listen to feedback!
Pat Myron 🔸 @ 2025-04-04T03:31 (+1) in response to U.S. Egg Price Opportunity
Easter (April 20th this year) is another unique opportunity:
There's likely less defensiveness in addressing annual egg decorating/tossing/hiding/etc. than in confronting daily diets
Karen Singleton @ 2025-04-03T02:29 (+7) in response to How should we adapt animal advocacy to near-term AGI?
Thank you for this post. I think it does a great job of outlining the double-edged sword we're facing: the potential for AI to either end enormous suffering or amplify it exponentially.
Your suggestion to reframe our movement's goal really expanded my thinking: "ensure that advanced AI and the people who control it are aligned with animals' interests by 2030." This feels urgent and necessary given the timelines you've outlined.
I'm particularly concerned that our society's current commodified view of animals could be baked into AGI systems and scaled to unprecedented levels.
The strategic targets you've identified make perfect sense - especially the focus on AI/animal collaborations and getting animal advocates into rooms where AGI decisions are being made. We should absolutely be leveraging AI-powered advocacy tools while we can still shape their development.
Thank you for this clarity. I'll be thinking much more deeply about how my own advocacy work needs to adapt to this possible near-future scenario.
Karen Singleton @ 2025-04-04T02:48 (+2)
This post inspired me to complete the BlueDot Future of AI course! Thanks Max!
Sharing in case this is useful for others - online, 2hr course: https://course.bluedot.org/future-of-ai
Aaron Bergman @ 2025-04-04T02:29 (+14) in response to Aaron Bergman's Quick takes
~30 second ask: Please help @80000_Hours figure out who to partner with by sharing your list of Youtube subscriptions via this survey
Unfortunately this only works well on desktop, so if you're on a phone, consider sending this to yourself for later. Thanks!
Sam Anschell @ 2025-04-04T00:30 (+10) in response to Three Organizers Walk Into a Liberal Arts College
Woah, huge congratulations on getting 80 pledges! That's a really incredible achievement - I hope you all feel proud :)
I would guess that established uni groups at big schools don't get 80 pledges per year; you might consider reaching out to GWWC (community@givingwhatwecan.org) to brainstorm how to make the most of this amazing momentum.
I don't have experience in student group organizing (not starting an EA group at my college is my biggest regret in life), but I'd recommend looking into whether your campus career center is open to co-hosting events and working with students on applying to high-impact roles.
At the liberal arts school I went to, events hosted by the career center tended to be pretty well-attended. Plus, you can lean on the job boards from 80k, Probably Good, and Animal Advocacy Careers to direct students to real world opportunities.
Another idea is to look into whether you can teach a student forum about EA for college credit! It really lowers the bar for students to commit to weekly meetings/readings if they can substitute it for another class.
And if students in your club are ever interested in talking to someone about entry-level operations or grantmaking work, I'm always excited to call!
Comments on 2025-04-03
Jason @ 2025-04-03T23:13 (+4) in response to Stewardship: CEA's 2025-26 strategy to reach and raise EA's ceiling
(your text seems to cut off at the end abruptly, suggesting a copy/paste error or the like)
AnonymousEAForumAccount @ 2025-04-03T23:48 (+2)
Thanks, edited to fix
Ben_West🔸 @ 2025-04-03T15:41 (+19) in response to Anthropic is not being consistently candid about their connection to EA
Hmm yeah, that's kinda my point? Like complaining about your annoying coworker anonymously online is fine, but making a public blog post like "my coworker Jane Doe sucks for these reasons" would be weird, people get fired for stuff like that. And referencing their wedding website would be even more extreme.
(Of course, most people's coworkers aren't trying to reshape the lightcone without public consent so idk, maybe different standards should apply here. I can tell you that a non-trivial number of people I've wanted to hire for leadership positions in EA have declined for reasons like "I don't want people critiquing my personal life on the EA Forum" though.)
Rebecca @ 2025-04-03T23:44 (+6)
No one is critiquing Daniela's personal life though, they're critiquing something about her public life (i.e. her voluntary public statements to journalists) for contradicting what she's said in her personal life. Compare this with a common reason people get cancelled, where the critique is that there's something bad in their personal life, and people are disappointed that the personal life doesn't reflect the public persona - in this case it's the other way around.
AnonymousEAForumAccount @ 2025-04-03T22:48 (+8) in response to Stewardship: CEA's 2025-26 strategy to reach and raise EA's ceiling
It's great that CEA will be prioritizing growing the EA community. IMO this is a long time coming.
Here are some of the things I'll be looking for which would give me more confidence that this emphasis on growth will go well:
Jason @ 2025-04-03T23:13 (+4)
(your text seems to cut off at the end abruptly, suggesting a copy/paste error or the like)
Ben_West🔸 @ 2025-04-03T03:14 (+16) in response to Anthropic is not being consistently candid about their connection to EA
fwiw I think in any circle I've been a part of critiquing someone publicly based on their wedding website would be considered weird/a low blow. (Including corporate circles.) [1]
I think there is a level of influence at which everything becomes fair game, e.g. Donald Trump can't really expect a public/private communication disconnect. I don't think that's true of Daniela, although I concede that her influence over the light cone might not actually be that much lower than Trump's.
Jason @ 2025-04-03T23:10 (+8)
I agree with this being weird / a low blow in general, but not in this particular case. The crux with your footnote may be that I see this as more of a continuum.
I think someone's interest in private communications becomes significantly weaker as they assume a position of great power over others, conditioned on the subject matter of the communication being a matter of meaningful public interest. Here, I think an AI executive's perspective on EA is a matter of significant public interest.
Second, I do not find a wedding website to be a particularly private form of communication compared to (e.g.) a private conversation with a romantic partner. Audience in the hundreds, no strong confidentiality commitment, no precautions to prevent public access.
The more power the individual has over others, the wider the scope of topics that are of legitimate public interest for the others to bring up, and the narrower the scope of communications that would be weird / a low blow to cite. So what applies to major corporate CEOs with significant influence over the future would not generally apply to most people.
Compare this to paparazzi, who hound celebrities (who do not possess CEO-level power) for material that is not of legitimate public interest, and often under circumstances in which society recognizes particularly strong privacy rights.
I'm reminded of the NBA basketball-team owner who made some racist basketball-related comments to his affair partner, who leaked them. My recollection is that people threw shade on the affair partner (who arguably betrayed his confidences), but few people complained about showering hundreds of millions of dollars worth of tax consequences on the owner by forcing the sale of his team against his will. Unlike comments to a medium-size audience on a website, the owner's comments were particularly private (to an intimate figure, 1:1, protected from non-consensual recording by criminal law).
VeryJerry @ 2025-04-03T22:53 (+1) in response to AI Moral Alignment: The Most Important Goal of Our Generation
I was just thinking about writing a post like this after listening to https://www.astralcodexten.com/p/introducing-ai-2027 and especially the end where they're talking about getting into blogging, and thinking about the massive blind spot Rationalists seem to have for sentientism. I'm particularly interested in ways to get involved and help push this cause forward. Especially as someone who frankly, feels pretty helpless with the mass scale of non-human suffering and mass amount of human apathy towards it, as well as the many flaws in the current animal rights movement.
AnonymousEAForumAccount @ 2025-04-03T22:48 (+8) in response to Stewardship: CEA's 2025-26 strategy to reach and raise EA's ceiling
It's great that CEA will be prioritizing growing the EA community. IMO this is a long time coming.
Here are some of the things I'll be looking for which would give me more confidence that this emphasis on growth will go well:
Holly Elmore ⏸️ 🔸 @ 2025-04-03T22:44 (+2) in response to What if I'm not open to feedback?
I did basically say that in this post lol https://forum.effectivealtruism.org/posts/tuSQBGgnoxvsXwXJ3/criticism-is-sanctified-in-ea-but-like-any-intervention
OllieBase @ 2025-03-31T15:09 (+14) in response to Stewardship: CEA's 2025-26 strategy to reach and raise EA's ceiling
We (the CEA Events Team) recently posted about how we cut costs for EA Global last year. That's a big contributing factor, and involved hiring someone (a production associate) to help us cut overall costs.
Oscar Howie @ 2025-04-03T19:26 (+8)
Staff costs are a relatively small proportion of our total spending, but the proportion increased in 2024 compared to 2023 (28% vs 21%).
Between 2021 and 2023, our total spending increased by 264% (from $6.9m to $25.1m), while our headcount increased only 40% (from 24 to 34), which meant we had insufficient capacity to improve the quality and cost-effectiveness of our programs. This informed our decision to make foundation-building our organizational priority in 2024, including both investing in hiring to increase our capacity and cutting non-staff costs, with the majority of savings (per Ollie's comment) being contributed by lower spending on events, especially EAG.
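The growth figures above can be sanity-checked from the dollar and headcount numbers quoted in the comment (a quick sketch using only those figures, which I have not independently sourced):

```python
# Figures quoted in the comment above (2021 vs 2023)
spend_2021, spend_2023 = 6.9, 25.1  # total spending, $m
head_2021, head_2023 = 24, 34       # headcount

# Percentage increases
spend_increase = (spend_2023 / spend_2021 - 1) * 100  # ~264%
head_increase = (head_2023 / head_2021 - 1) * 100     # ~42%, which the comment rounds to 40%

print(round(spend_increase), round(head_increase))  # 264 42
```

So spending grew roughly six times faster than headcount over that period, which is the capacity squeeze the comment describes.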
Vasco Grilo🔸 @ 2025-03-31T17:15 (0) in response to What I learned from a week in the EU policy bubble
I believe there are positions within the system which are more impactful than a random one in ACE's recommended charities. However, I think those are quite senior, and therefore super hard to get, especially for people wanting to go against the system in the sense of prioritising animal welfare much more.
I guess this also applies to junior positions within the system, whose freedom would be determined to a significant extent by people in senior positions.
David T @ 2025-04-03T18:08 (+9)
The obvious difference is that an alternative candidate for a junior position in a shrimp welfare organization is likely to be equally concerned about shrimp welfare. An alternative candidate for a junior person in an MEP's office or DG Mare is not, hence the difference at the margin is (if non-zero) likely much greater. And a junior person progressing in their career may end up with direct policy responsibility for their areas of interest, whereas a person who remains a lobbyist will never have this. It even seems non-obvious that even a senior lobbyist will have more impact on policymakers than their more junior adviser or research assistant, though as you say it does depend on whether the junior adviser has the freedom to highlight issues of concern.
Ben_West🔸 @ 2025-04-03T16:32 (+9) in response to Anthropic is not being consistently candid about their connection to EA
Yeah, this used to be my take but a few iterations of trying to hire for jobs which exclude shy awkward nerds from consideration when the EA candidate pool consists almost entirely of shy awkward nerds has made the cost of this approach quite salient to me.
There are trade-offs to everything 🤷‍♂️
NickLaing @ 2025-04-03T17:36 (+2)
100 percent man
Manuel Allgaier @ 2025-04-03T16:34 (+7) in response to Centre for Effective Altruism Is No Longer "Effective Altruism"-Related
I feel like mainstream people like EA until they understand the implications and are faced with their first trade-off for who to help. To keep them engaged, maybe the new CEA could skip the prioritization part and just focus on making people feel better about their initial cause.
David_Moss @ 2025-04-03T16:57 (+4)
Maybe a slogan could be "All altruism is effective"?
David_Moss @ 2025-04-02T10:49 (+23) in response to Centre for Effective Altruism Is No Longer "Effective Altruism"-Related
RP actually did some empirical testing on this and we concluded that people really like the name "Effective Altruism", but not the ideas, values or mission.
That's unfortunate. But I think it suggests there's scope for a new 'Centre for Effective Altruism' to push forward exciting new ideas that have more mainstream appeal, like raising awareness of the cause du jour, while the rebranded Center for ████████ continues to focus on all the unpopular stuff.
Manuel Allgaier @ 2025-04-03T16:34 (+7)
I feel like mainstream people like EA until they understand the implications and are faced with their first trade-off for who to help. To keep them engaged, maybe the new CEA could skip the prioritization part and just focus on making people feel better about their initial cause.
NickLaing @ 2025-04-03T15:57 (+4) in response to Anthropic is not being consistently candid about their connection to EA
That's interesting, and I'm sad to hear about people declining jobs due to those reasons. On the other hand, some leadership jobs might not be the right fit if the person isn't up for that kind of critique. I would imagine there are a bunch of ways to avoid the "EA limelight" for many positions though, of course not public-facing ones.
Slight quibble though: I would consider "Jane Doe sucks for these reasons" an order of magnitude more objectionable than quoting a wedding website to make a point. Maybe wedding websites are sacrosanct in a way I'm missing tho...
Ben_West🔸 @ 2025-04-03T16:32 (+9)
Yeah, this used to be my take but a few iterations of trying to hire for jobs which exclude shy awkward nerds from consideration when the EA candidate pool consists almost entirely of shy awkward nerds has made the cost of this approach quite salient to me.
There are trade-offs to everything 🤷‍♂️
SiobhanBall @ 2025-04-03T07:49 (+2) in response to Announcement: New Services for Capacity Building in Nonprofits
Hi Deena, first of all, congratulations on your new arrival! Fellow EA mum here.
So this is a cool business of which I was previously unaware, so thanks for posting.
A key question that came to mind when reading your post and site was: what's stopping clients from going straight to EASE/your partners? I see that you offer a matchmaking service, but for clients who are equally unfamiliar with you and with your partners, the level of trust is the same either way.
Also, how do you untangle the overlapping roles e.g. some of your individual partners now work as employees for some of your organisation partners offering similar services; could there be conflicts of interest there?
Deena Englander @ 2025-04-03T16:12 (+1)
Thank you! We're enjoying her :)
There's nothing stopping clients from going straight to EASE - that's part of why we make it publicly available: we want people to have easy access to qualified professionals. However, there are a few scenarios in which we can help:
So that's why we make the matchmaking service free. It's an easy way to provide value and make sure orgs get the right support.
I do hope that over time, we'll have enough trust from the community that our opinion will matter!
For any partners who work at similar organizations, their arrangement with their employers is their own affair; if they're working full time there, they're doing other work on the side (although I believe that the majority of the professionals have their own businesses).
AGB 🔸 @ 2025-03-30T11:51 (+132) in response to Third-wave AI safety needs sociopolitical thinking
Just to respond to a narrow point because I think this is worth correcting as it arises: Most of the US/EU GDP growth gap you highlight is just population growth. In 2000 to 2022 the US population grew ~20%, vs. ~5% for the EU. That almost exactly explains the 55% vs. 35% growth gap in that time period on your graph; 1.55 / 1.2 * 1.05 = 1.36.
This shouldn't be surprising, because productivity in the 'big 3' of US / France / Germany track each other very closely and have done for quite some time. (Edit: I wasn't expecting this comment to blow up, and it seems I may have rushed this point. See Erich's comment below and my response.) Below source shows a slight increase in the gap, but of <5% over 20 years. If you look further down my post the Economist has the opposing conclusion, but again very thin margins. Mostly I think the right conclusion is that the productivity gap has barely changed relative to demographic factors.
I'm not really sure where the meme that there's some big / growing productivity difference due to regulation comes from, but I've never seen supporting data. To the extent culture or regulation is affecting that growth gap, it's almost entirely going to be from things that affect total working hours, e.g. restrictions on migration, paid leave, and lower birth rates[1], not from things like how easy it is to found a startup.
https://www.economist.com/graphic-detail/2023/10/04/productivity-has-grown-faster-in-western-europe-than-in-america
Fertility rates are actually pretty similar now, but the US had much higher fertility than Germany especially around 1980 - 2010, converging more recently, so it'll take a while for that to impact the relative sizes of the working populations.
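The population adjustment in the comment above can be reproduced in a few lines (all numbers are the approximate figures quoted in the comment, 2000-2022, not independently sourced):

```python
# Figures quoted in the comment above
us_gdp_growth = 1.55  # US GDP grew ~55%, 2000-2022
eu_gdp_growth = 1.35  # EU GDP grew ~35%
us_pop_growth = 1.20  # US population grew ~20%
eu_pop_growth = 1.05  # EU population grew ~5%

# GDP per capita growth = GDP growth / population growth
us_per_capita = us_gdp_growth / us_pop_growth  # ~1.29
eu_per_capita = eu_gdp_growth / eu_pop_growth  # ~1.29, nearly identical

# The comment's one-liner: rescale US growth to the EU's population growth
adjusted = us_gdp_growth / us_pop_growth * eu_pop_growth
print(round(adjusted, 2))  # 1.36, close to the EU's actual 1.35
```

In other words, once both regions are put on the same population trajectory, the headline GDP growth gap almost vanishes, which is the comment's point.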
NickLaing @ 2025-04-03T16:01 (+9)
Most changed-mind votes in the history of EA comments? This blew my mind a bit; I feel like I've read so much about American productivity outpacing Europe. I think this deserves a full-length article.
Ben_West🔸 @ 2025-04-03T15:41 (+19) in response to Anthropic is not being consistently candid about their connection to EA
Hmm yeah, that's kinda my point? Like complaining about your annoying coworker anonymously online is fine, but making a public blog post like "my coworker Jane Doe sucks for these reasons" would be weird, people get fired for stuff like that. And referencing their wedding website would be even more extreme.
(Of course, most people's coworkers aren't trying to reshape the lightcone without public consent so idk, maybe different standards should apply here. I can tell you that a non-trivial number of people I've wanted to hire for leadership positions in EA have declined for reasons like "I don't want people critiquing my personal life on the EA Forum" though.)
NickLaing @ 2025-04-03T15:57 (+4)
That's interesting, and I'm sad to hear about people declining jobs due to those reasons. On the other hand, some leadership jobs might not be the right fit if the person isn't up for that kind of critique. I would imagine there are a bunch of ways to avoid the "EA limelight" for many positions though, of course not public-facing ones.
Slight quibble though: I would consider "Jane Doe sucks for these reasons" an order of magnitude more objectionable than quoting a wedding website to make a point. Maybe wedding websites are sacrosanct in a way I'm missing tho...
NickLaing @ 2025-04-03T05:29 (+6) in response to Anthropic is not being consistently candid about their connection to EA
Wow again I just haven't moved in circles where this would even be considered. Only the most elite 0.1 percent of people can even have a meaningful "public/private disconnect", as you have to have quite a prominent public profile for that to even be an issue. Although we all have a "public profile" in theory, very few people are famous/powerful enough for it to count.
I don't think I believe in a public/private disconnect but I'll think about it some more. I believe in integrity and honesty in most situations, especially when you are publicly disparaging a movement. If you have chosen to lie about and smear a movement with "My impression is that it's a bit of an outdated term", then I think this makes what you say a bit more fair game than other statements where you aren't low-key attacking a group of well-meaning people.
Ben_West🔸 @ 2025-04-03T15:41 (+19)
Hmm yeah, that's kinda my point? Like complaining about your annoying coworker anonymously online is fine, but making a public blog post like "my coworker Jane Doe sucks for these reasons" would be weird, people get fired for stuff like that. And referencing their wedding website would be even more extreme.
(Of course, most people's coworkers aren't trying to reshape the lightcone without public consent so idk, maybe different standards should apply here. I can tell you that a non-trivial number of people I've wanted to hire for leadership positions in EA have declined for reasons like "I don't want people critiquing my personal life on the EA Forum" though.)
Julia_Wise🔸 @ 2025-04-03T15:35 (+2) in response to Save PEPFAR picnic
Want to share this in the Boston EA Facebook group? https://www.facebook.com/groups/1552072601751317
SummaryBot @ 2025-04-03T15:28 (+1) in response to Announcement: New Services for Capacity Building in Nonprofits
Executive summary: WorkStream Nonprofit has launched new service offeringsâincluding executive assistant support, bookkeeping, tech implementation, and hiring helpâto strengthen nonprofit operational capacity and impact, alongside free resources and an upcoming accelerator program.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
SummaryBot @ 2025-04-03T15:27 (+1) in response to EAGx CDMX 2025 â EA UPY Perspective
Executive summary: EA UPY's coordinated and active participation in EAGx CDMX 2025 fostered individual growth, community building, and meaningful connections, with strong pre-event preparation enabling a highly impactful experience for members.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Benevolent_Rain @ 2025-04-03T10:31 (+4) in response to Ozzie Gooen's Quick takes
A related issue I have actually encountered is something like "but you seem overqualified for this role we are hiring for". Even if previously successful people wanted to take a "less prestigious" role, they might encounter real problems in doing so. I hope the EA ecosystem might have some immunity to this though - as hopefully the mission alignment will be strong enough evidence of why such a person might show interest in a "lower" role.
Joseph @ 2025-04-03T15:05 (+6)
As a single data point: seconded. I've explicitly been asked by interviewers (in a job interview) why I left a "higher title job" for a "lower title job," with the implication that it needed some special justification. I suspect there have also been multiple times in which someone looking at my resume saw that transition, made an assumption about it, and chose to reject me. (although this probably happens with non-EA jobs more often than EA jobs, as the "lower title role" was with a well-known EA organization)
Benevolent_Rain @ 2025-04-03T10:31 (+4) in response to Ozzie Gooen's Quick takes
A related issue I have actually encountered is something like "but you seem overqualified for this role we are hiring for". Even if previously successful people wanted to take a "less prestigious" role, they might encounter real problems in doing so. I hope the EA ecosystem might have some immunity to this though - as hopefully the mission alignment will be strong enough evidence of why such a person might show interest in a "lower" role.
Ozzie Gooen @ 2025-04-03T14:59 (+2)
Good point. And sorry you had to go through that, it sounds quite frustrating.
Vasco Grilo🔸 @ 2025-04-03T13:33 (+2) in response to New guides on how to (actually) get a job
Thanks for putting this together! Looks great.
Lorenzo Buonanno🔸 @ 2025-03-30T14:28 (+68) in response to Anthropic is not being consistently candid about their connection to EA
I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
I think the confusion might stem from interpreting EA as "self-identifying with a specific social community" (which they claim they don't, at least not anymore) vs EA as "wanting to do good and caring about others" (which they claim they do, and always did)
Going point by point:
This was more than 10 years ago. EA was a very different concept / community at the time, and this is consistent with Daniela Amodei saying that she considers it an "outdated term"
This was also more than 10 years ago, and giving to charity is not unique to EA. Many early pledgers don't consider themselves EA (e.g. signatory #46 claims it got too stupid for him years ago)
Amanda Askell explicitly says "I definitely have met people here who are effective altruists" in the article you quote, so I don't think this contradicts it in any way
https://x.com/AmandaAskell/status/1905995851547148659
That's false: https://en.wikipedia.org/wiki/Artificial_consciousness
Wanting to make the world better, wanting to help people, and giving significantly to charity are not prerogatives of the EA community.
I think that's exactly what they are doing in the quotes in the article: "I don't identify with that terminology" and "it's not a theme of the organization or anything"
I don't think they suggest that, depending on your definition of "strong". Just above the sceenshotted quote, the article mentions that many early investors were at the time linked to EA.
I don't think X responses are a good metric of honesty, and those seem to be mostly from people in the EA community.
In general, I think it's bad for the EA community that everyone who interacts with it has to worry about being liable for life for anything the EA community might do in the future.
I don't see why it can't let people decide if they want to consider themselves part of it or not.
As an example, imagine if I were Catholic, founded a company to do good, raised funding from some Catholic investors, and some of the people I hired were Catholic. If 10 years later I weren't Catholic anymore, it wouldn't be dishonest for me to say "I don't identify with the term, and this is not a Catholic company, although some of our employees are Catholic". And giving to charity or wanting to do good wouldn't be gotchas that I'm secretly still Catholic and hiding the truth for PR reasons. And this is not even about being a part of a specific social community.
Lukas_Gloor @ 2025-04-03T12:36 (+11)
I never interpreted that to be the crux/problem here. (I know I'm late replying to this.)
People can change what they identify as. For me, what looks shady in their responses is the clumsy attempts at downplaying their past association with EA.
I don't care about it because I still identify with EA; instead, I care because it goes under "not being consistently candid." (I quite like that expression despite its unfortunate history). I'd be equally annoyed if they downplayed some significant other thing unrelated to EA.
Sure, you might say it's fine not being consistently candid with journalists. They may quote you out of context. Pretty common advice for talking to journalists is to keep your statements as short and general as possible, esp. when they ask you things that aren't "on message." Probably they were just trying to avoid actually-unfair bad press here? Still, it's clumsy and ineffective. It backfired. Being candid would probably have been better here even from the perspective of preventing journalists from spinning this against them. Also, they could just decide not to talk to untrusted journalists?
More generally, I feel like we really need leaders who can build trust and talk openly about difficult tradeoffs and realities.
Uni Groups Team @ 2025-04-03T12:30 (+1) in response to Should EA group organisers (still) recommend 80K advising 'by default'?
~ Uni Groups / Jemima
titotal @ 2025-03-29T10:13 (+2) in response to Will explosive growth stem primarily from AI R&D automation?
I feel like the counterpoint here is that R&D is incredibly hard. In regular development, you have established methods of how to do things, established benchmarks of when things are going well, and a long period of testing to discover errors, flaws, and mistakes through trial and error.
In R&D, you're trying to do things that nobody has ever done before, and simultaneously establish methods, benchmarks, and errors for that new method, which carries a ton of potential pitfalls. Also, nobody has ever done it before, so the AI is always inherently out-of-training to a much greater degree than in regular work.
OscarD🔸 @ 2025-04-03T12:27 (+2)
Yes, this seems right, hard to know which effect will dominate. I'm guessing you could assemble pretty useful training data of past R&D breakthroughs which might help, but that will only get you so far.
Kristof Redei @ 2025-03-28T16:42 (+1) in response to Bridging Worldviews: Tantric Retreat Centre Goes Earning to Give
This is super interesting and rhymes a bit with my own efforts to connect the EA community with another one that also has overlap in values but a distinct culture (harm reduction). I took a lot from another earlier post on this topic as well: https://forum.effectivealtruism.org/posts/8Qdc5mPyrfjttLCZn/learning-from-non-eas-who-seek-to-do-good
It's cool to see the members of the tantric retreat were open to learning from EA - are there any learnings you think this community in turn offers to EA?
Jonathan Moregård @ 2025-04-03T12:14 (+1)
I see a lot of value in some of the practices, skills when it come to being attuned to emotions and mind states, and communication norms that allow issues to be brought up and handled in meetings.
This avoids failure modes like resentment building up over time, or unacknowledged resentment/tiredness/stress affecting the outcomes of meetings and interactions.
I have a substack where I write about a lot of different topics, including presenting some ideas I believe can be helpful to EA/LW audiences: honestliving.substack.com.
Right now I'm doing a piece on breathwork as a tool for rapid stress decrease, alertness increase, and "resetting" thinking patterns and state of mind. I have talked to some EAs who get stuck in rabbit holes/sub-branches of a problem, and find themselves unstuck the next morning, with sleep "resetting" some unacknowledged assumptions. Breathwork does the same for me, but quicker. Picking it up and getting used to it takes at most 20 minutes a week, with low risk if handled with care.
Ozzie Gooen @ 2025-03-30T22:22 (+51) in response to Ozzie Gooen's Quick takes
Reflections on "Status Handcuffs" over one's career
(This was edited using Claude)
Having too much professional success early on can ironically restrict you later on. People typically are hesitant to go down in status when choosing their next job. This can easily mean that "staying in career limbo" can be higher-status than actually working. At least when you're in career limbo, you have a potential excuse.
This makes it difficult to change careers. It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to learn a new domain, for instance.
The EA Community Context
In the EA community, some aspects of this are tricky. The funders very much want to attract new and exciting talent. But this means that the older talent is in an awkward position.
The most successful get to take advantage of the influx of talent, with more senior leadership positions. But there aren't too many of these positions to go around. It can feel weird to work on the same level or under someone more junior than yourself.
Pragmatically, I think many of the old folks around EA are either doing very well, or are kind of lost/exploring other avenues. Other areas allow people to have more reputable positions, but these are typically not very EA/effective areas. Often E2G isn't very high-status in these clusters, so I think a lot of these people just stop doing much effective work.
Similar Patterns in Other Fields
This reminds me of law firms, which are known to have "up or out" cultures. I imagine some of this acts as a formal way to prevent this status challenge - people who don't highly succeed get fully kicked out, in part because they might get bitter if their career gets curtailed. An increasingly narrow set of lawyers continue on the Partner track.
I'm also used to hearing about power struggles among senior managers close to retirement at big companies, where a similar dynamic plays out. There's a large cluster of highly experienced people who have stopped being strong enough to stay at the highest levels of management. Typically these people stay too long, then completely leave. There can be few paths to gracefully go down a level or two while saving face and continuing to provide some amount of valuable work.
But around EA and a lot of tech, I think this pattern can happen much sooner - like when people are in the age range of 22 to 35. It's more subtle, but it still happens.
Finding Solutions
I'm very curious if it's feasible for some people to find solutions to this. One extreme would be, "Person X was incredibly successful 10 years ago. But that success has faded, and now the only useful thing they could do is office cleaning work. So now they do office cleaning work. And we've all found a way to make peace with this."
Traditionally, in Western culture, such an outcome would be seen as highly shameful. But in theory, being able to find peace and satisfaction from something often seen as shameful for (what I think of as overall-unfortunate) reasons could be considered a highly respectable thing to do.
Perhaps there could be a world where [valuable but low-status] activities are identified, discussed, and later made high-status.
The EA Ideal vs. Reality
Back to EA. In theory, EAs are people who try to maximize their expected impact. In practice, EA is a specific ideology that typically has a limited impact on people (at least compared to strong religious groups, for instance). I think that the EA scene has demonstrated success at getting people to adjust careers (in circumstances where it's fairly cheap and/or favorable to do so), and has created an ecosystem that rewards people for certain EA behaviors. But at the same time, people typically come with a great deal of non-EA constraints that must be continually satisfied for them to be productive: money, family, stability, health, status, etc.
Personal Reflection
Personally, every few months I really wonder what might make sense for me. I'd love to be the kind of person who would be psychologically okay doing the lowest-status work for the youngest or lowest-status people. At the same time, knowing myself, I'm nervous that taking a very low-status position might cause some of my mind to feel resentment and burnout. I'll continue to reflect on this.
Benevolent_Rain @ 2025-04-03T10:31 (+4)
A related issue I have actually encountered is something like "but you seem overqualified for this role we are hiring for". Even if previously successful people wanted to take a "less prestigious" role, they might encounter real problems in doing so. I hope the EA ecosystem might have some immunity to this though, as hopefully the mission alignment will be strong enough evidence of why such a person might show interest in a "lower" role.
Neel Nanda @ 2025-04-02T00:57 (+6) in response to What if I'm not open to feedback?
Positive feedback: Great post!
Negative feedback: By taking any public actions you make it easier for people to give you feedback, a major tactical error (case in point)
frances_lorenz @ 2025-04-03T10:23 (+4)
Hey Neel! This reply upset me so much that I'm now planning to make AGI and actively oppose AI safety :) Hope it was worth it!
Lorenzo Buonanno🔸 @ 2025-03-30T14:28 (+68) in response to Anthropic is not being consistently candid about their connection to EA
I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
I think the confusion might stem from interpreting EA as "self-identifying with a specific social community" (which they claim they don't, at least not anymore) vs EA as "wanting to do good and caring about others" (which they claim they do, and always did)
Going point by point:
This was more than 10 years ago. EA was a very different concept / community at the time, and this is consistent with Daniela Amodei saying that she considers it an "outdated term"
This was also more than 10 years ago, and giving to charity is not unique to EA. Many early pledgers don't consider themselves EA (e.g. signatory #46 claims it got too stupid for him years ago)
Amanda Askell explicitly says "I definitely have met people here who are effective altruists" in the article you quote, so I don't think this contradicts it in any way
https://x.com/AmandaAskell/status/1905995851547148659
That's false: https://en.wikipedia.org/wiki/Artificial_consciousness
Wanting to make the world better, wanting to help people, and giving significantly to charity are not prerogatives of the EA community.
I think that's exactly what they are doing in the quotes in the article: "I don't identify with that terminology" and "it's not a theme of the organization or anything"
I don't think they suggest that, depending on your definition of "strong". Just above the screenshotted quote, the article mentions that many early investors were at the time linked to EA.
I don't think X responses are a good metric of honesty, and those seem to be mostly from people in the EA community.
In general, I think it's bad for the EA community that everyone who interacts with it has to worry about being held liable, for life, for anything the EA community might do in the future.
I don't see why the community can't let people decide whether they want to consider themselves part of it or not.
As an example, imagine if I were Catholic, founded a company to do good, raised funding from some Catholic investors, and some of the people I hired were Catholic. If 10 years later I weren't Catholic anymore, it wouldn't be dishonest for me to say "I don't identify with the term, and this is not a Catholic company, although some of our employees are Catholic". And giving to charity or wanting to do good wouldn't be gotchas that I'm secretly still Catholic and hiding the truth for PR reasons. And this is not even about being a part of a specific social community.
David Mathers🔸 @ 2025-04-03T10:20 (+8)
Just as a side point, I do not think Amanda's past relationship with EA can accurately be characterized as much like Jonathan Blow's, unless he was far more involved than just being an early GWWC pledge signatory, which I think is unlikely. It's not just that Amanda was, as the article says, once married to Will. She wrote her doctoral thesis on an EA topic, how to deal with infinities in ethics: https://askell.io/files/Askell-PhD-Thesis.pdf Then she went to work in AI for what I think is overwhelmingly likely to be EA reasons (though I admit I don't have any direct evidence to that effect), given that it was in 2018, before the current excitement about generative AI, and relatively few philosophy PhDs, especially those who could fairly easily have gotten good philosophy jobs, made that transition. She wasn't a public figure back then, but I'd be genuinely shocked to find out she didn't have an at least mildly significant behind-the-scenes effect through conversation (not just with Will) on the early development of EA ideas.
Not that I'm accusing her of dishonesty here or anything: she didn't say that she wasn't EA or that she had never been EA, just that Anthropic wasn't an EA org. Indeed, given that I just checked and she still mentions being a GWWC member prominently on her website, and she works on AI alignment and wrote a thesis on a weird, longtermism-coded topic, I am somewhat skeptical that she is trying to personally distance from EA: https://askell.io/
funnyfranco @ 2025-04-03T00:25 (−1) in response to Why We Need a Beacon of Hope in the Looming Gloom of AGI
That's why I write my essays and try and get the word out. Because even if the rope is tight around your neck and there seems like no way to get out of it, you should still kick your feet and try.
Beyond Singularity @ 2025-04-03T09:59 (+1)
I think it's good, essential even, that you keep trying and speaking out. Sometimes that's what helps others to act too.
The only thing I worry about is that this fight, if framed only as hopeless, can paralyze the very people who might help change the trajectory.
Despair can be as dangerous as denial.
That's why I believe the effort itself matters: not because it guarantees success, but because it keeps the door open for others to walk through.
Mo Putera @ 2025-04-03T08:19 (+11) in response to Comparing the Impact of Donating a Portion of Income from a High-Paying Job vs. Pursuing 80,000 Hours Careers
A few quick reactions in case they're helpful:
winter_spirals @ 2025-04-03T08:01 (+1) in response to Will AI R&D Automation Cause a Software Intelligence Explosion?
It's an interesting hypothesis. I think one way in which an SIE can be encouraged is through AI- and data-enabled financial / risk modelling of any given R&D project.
I was writing on this yesterday, serendipitously!
AI financial risk quantification might significantly improve the accuracy of priors or other probabilistic model variables used to evaluate any given R&D IP for a market. If so, we might well be on the cusp of a gradual transition to an economy that is increasingly (one day entirely?) R&D-focused, on the assumption that AI or AI-enabled R&D is more likely to perform competitively, for either direct or indirect reasons. (I think the psychological component of AI augmenting the way people think about, or copilot to solve, problems is still an open area for approach…)
SiobhanBall @ 2025-04-03T07:49 (+2) in response to Announcement: New Services for Capacity Building in Nonprofits
Hi Deena, first of all, congratulations on your new arrival! Fellow EA mum here.
So this is a cool business of which I was previously unaware, so thanks for posting.
A key question that came to mind when reading your post and site was: what's stopping clients from going straight to EASE/your partners? I see that you offer a matchmaking service, but for clients who are as unfamiliar with you as with your partners, the level of trust is the same either way.
Also, how do you untangle the overlapping roles? E.g., some of your individual partners now work as employees for some of your organisation partners offering similar services; could there be conflicts of interest there?
gergo @ 2025-04-03T07:44 (+2) in response to The Short Timelines Strategy for AI Safety University Groups
Great post, thanks for sharing!
shepardriley @ 2025-04-03T07:25 (+2) in response to The EA University Groups' Prisoner's Dilemma!
May the best group win!
It sure sounds like it :)
Patrick Hoang @ 2025-04-01T01:41 (+2) in response to The EA University Groups' Prisoner's Dilemma!
I defected! Everyone, if you want to lose, choose DEFECT
shepardriley @ 2025-04-03T07:24 (+2)
On behalf of EA Manchester, I truly appreciate it.
Mo Putera @ 2025-04-02T05:15 (+34) in response to Mo Putera's Quick takes
I spent most of my early career as a data analyst in industry, which engendered in me a deep wariness of quantitative data sources and plumbing, and a neverending discomfort at how often others tended to just take them as given for input into consequential decision-making, even if at an intellectual level I knew their constraints and other priorities justified it and they were doing the best they could. ...and then I moved to global health applied research and realised that the data trustworthiness situation was so much worse I had to recalibrate a lot of expectations / intuitions.
In that regard I appreciate GiveWell's new guidance on burden note:
The rest of the note was cathartic to skim-read. For instance, when I looked into the idea of distributing low-cost glasses to correct presbyopia in low-income countries a while back (a problem that afflicts over 1.8 billion people globally, with >$50 billion in lost potential productivity annually in LMICs alone), the industry data analyst in me was dismayed to learn that the WHO didn't even collect data on how many people needed glasses prior to 2008, so governments and associated stakeholders understandably prioritised allocation of resources towards surgical and medical interventions instead. I think the existence of orgs like IHME and OWID greatly improves the GHD data situation nowadays, but there are many "pockets" where it remains a far cry from what it could be, so I appreciated that GiveWell said they're considering
Another example: a fair bit of my earlier analyst work involved either reconciling discrepant figures for ostensibly similar metrics (e.g. campaign revenue breakdowns etc) or root-cause analysing-via-data-plumbing whether a flagged metric needed to be acted on or was a false positive, which made me appreciate this section:
NickLaing @ 2025-04-03T07:02 (+4)
This is fantastic to hear! The Global Burden of Disease process (while the best and most reputable we have) is surprisingly opaque and hard to follow in many cases. I haven't been able to find the spreadsheets with their calculations.
Their numbers are usually reasonable but bewildering in some cases and obviously wrong in others. GiveWell moving towards combining GBD with other sensible models is a great way forward.
It's a bit unfortunate that the best burden-of-disease models we have aren't more understandable.
LegSports @ 2024-08-26T19:39 (+6) in response to Report: The Broken State of Animal Advocacy in Universities
Thanks for the interesting summary of campus activism! A few questions that came to mind while reading this:
Dylan Richardson @ 2025-04-03T06:56 (+1)
My two cents is that "brand consistency" is interesting, because brands reflect, roughly, the strain of vegan club it is: whether it's associated with particular activist networks, whether it's more vegetarian than vegan, or something else. The level of inconsistency is also indicative of a lack of coordination across groups.
My experience in university was that the local club was a bit of an awkward merge between a social club and people with a particular activist agenda (very visible demonstrations against animal labs). In a sense, the career building approach of Alt Protein Projects or the cause agnosticism of EA groups may be better at attracting members. But I'm not sure.
Jeffrey Kursonis @ 2025-04-03T06:55 (+3) in response to Centre for Effective Altruism Is No Longer "Effective Altruism"-Related
So glad Consequentialism is out, and we can finally follow our feelings!! It feels so dang good. I love being a human with feelings. All these years denying it to follow mathematical algorithms like a robot was tiring. Feelings are super effective!! I suppose we'll need a new introduction course where we explain to smart people what feelings are, and how they are already included and fully installed in us but we just need to put a check mark in that one box to turn them on, and boom, when you do that suddenly you see a whole new world and there's lots of art everywhere too. Finally EA will have some art, coz us feeling humans of course demand it.
Jeffrey Kursonis @ 2025-04-03T06:44 (+1) in response to 80,000 Hours: Job Board -> Job Birds
A good Toucan always goes down well. And what's cool is they like cereal, so you bond together in the morning!
SiobhanBall @ 2025-04-02T08:33 (+7) in response to Centre for Effective Altruism Is No Longer "Effective Altruism"-Related
I went even further. I added adjacent three times and ended up back where I started.
Davidmanheim @ 2025-04-03T06:42 (+2)
I think it's better to play 5d chess - so I'm EA-adjacent-adjacent-adjacent-adjacent-adjacent.
Melanie Brennan @ 2025-04-03T05:45 (+3) in response to Centre for Effective Altruism Is No Longer "Effective Altruism"-Related
This post made my day (to be fair, it's only 7:40am, but whatever, I doubt anything else can put such a big smile on my face in the remaining 15+ hours).
Great article, Centre for what now?
Ben_West🔸 @ 2025-04-03T03:14 (+16) in response to Anthropic is not being consistently candid about their connection to EA
fwiw I think in any circle I've been a part of critiquing someone publicly based on their wedding website would be considered weird/a low blow. (Including corporate circles.) [1]
I think there is a level of influence at which everything becomes fair game, e.g. Donald Trump can't really expect a public/private communication disconnect. I don't think that's true of Daniela, although I concede that her influence over the light cone might not actually be that much lower than Trump's.
NickLaing @ 2025-04-03T05:29 (+6)
Wow, again I just haven't moved in circles where this would even be considered. Only the most elite 0.1 percent of people can even have a meaningful "public private disconnect", as you have to have quite a prominent public profile for that to even be an issue. Although we all have a "public profile" in theory, very few people are famous/powerful enough for it to count.
I don't think I believe in a public/private disconnect, but I'll think about it some more. I believe in integrity and honesty in most situations, especially when you are publicly disparaging a movement. If you have chosen to lie and smear a movement with "My impression is that it's a bit of an outdated term", then I think this makes what you say a bit more fair game than other statements where you aren't low-key attacking a group of well-meaning people.
Mo Putera @ 2025-04-03T04:37 (+4) in response to Want To Be An Expert? Build Deep Models
Alternative & complementary response: which experts? Why them, instead of these other experts who disagree with the former? How can you tell if you're (say) being misled? To quote John Wentworth:
John also suggests that the kind of deep model you want to build is gears-level models (that link has a lot of examples across various domains):
John has some advice on how to read papers to build gears-level models, although for most situations I prefer Sarah Constantin's advice to do fact-posting.
MichaelDickens @ 2025-04-03T04:06 (+4) in response to Red-teaming PowerSmoothie.org by Holden Karnofsky
I've tried putting olive oil in smoothie-adjacent concoctions (calling the things I've made "smoothies" would be an insult to smoothies) and it always makes me nauseous.
One time, due to poor planning, the only thing I had available to eat all day was an olive-oil-based smoothie-adjacent beverage, and I still couldn't manage to choke it down.
NickLaing @ 2025-04-01T13:28 (+4) in response to Anthropic is not being consistently candid about their connection to EA
That's interesting; I think I might move in different circles. Most people I know would not really understand the concept of there being a PR world where you present different things from your personal life.
Perhaps you move in more corporate or higher-flying circles where this kind of disconnect is normal and where it's fine to have a public/private communication disconnect which is considered rude to challenge? Interesting!
Luke Eure @ 2022-08-23T14:48 (+4) in response to Protest movements: How effective are they?
Thank you! Agreed that EA as a community often overlooks the value of protests and social change. Excited to look more deeply into the report
On "backfire" - do you have any view on backfire of BLM protests? I've been concerned with the pattern of protest -> police stop enforcing in a neighborhood -> murder rates go up. Seems like if this does happen, it really raises the bar as to the long-run positive effects protests like this need to achieve in order to offset the medium-term murder increase.
But maybe I'm thinking of this wrong. Or maybe this wouldn't be considered backfire - more of an unintended side effect?
Source: https://marginalrevolution.com/marginalrevolution/2022/06/what-caused-the-2020-spike-in-murders.html
MichaelDickens @ 2025-04-03T03:12 (+4)
I'm a bit late to the party but:
I wouldn't consider this a "backfire", although murder rates going up is definitely a bad thing. In the context of protests, a backfire isn't when anything bad happens, it's when the protests hurt the protesters' goals. If "police stop enforcing in a neighborhood" is a goal of BLM protests (which it basically is), then this is a success, not a backfire, and the increase in murder rate is an unfortunate consequence.
A backfire effect would be something like: protest -> protests make people feel unsafe -> city allocates more funding to the police.
Karen Singleton @ 2025-04-03T02:29 (+7) in response to How should we adapt animal advocacy to near-term AGI?
Thank you for this post. I think it does a great job of outlining the double-edged sword we're facing: the potential for AI to either end enormous suffering or amplify it exponentially.
Your suggestion to reframe our movement's goal really expanded my thinking: "ensure that advanced AI and the people who control it are aligned with animals' interests by 2030." This feels urgent and necessary given the timelines you've outlined.
I'm particularly concerned that our society's current commodified view of animals could be baked into AGI systems and scaled to unprecedented levels.
The strategic targets you've identified make perfect sense - especially the focus on AI/animal collaborations and getting animal advocates into rooms where AGI decisions are being made. We should absolutely be leveraging AI-powered advocacy tools while we can still shape their development.
Thank you for this clarity. I'll be thinking much more deeply about how my own advocacy work needs to adapt to this possible near-future scenario.
Marcus Abramovitch 🔸 @ 2025-04-02T06:49 (+57) in response to Anthropic is not being consistently candid about their connection to EA
I understand why people shy away from / hide their identities when speaking with journalists, but I think this is a mistake, largely for reasons covered in this post. A large part of the deterioration of EA's name brand is not just FTX but the risk-averse reaction to FTX by individuals (again, for understandable reasons), which harms the movement in a way where the costs are externalized.
When PG refers to keeping your identity small, he means don't defend it or its characteristics for the sake of it. There's nothing wrong with being a C/C++ programmer, but realizing it's not the best for rapid development needs or memory safety. In this case, you can own being an EA/your affiliation to EA and not need to justify everything about the community.
We have a bit of a tragedy-of-the-commons problem: a lot of people are risk-averse and don't want to be associated with EA in case something bad happens to them, but this causes the brand to lose a lot of good people you'd be happy to be associated with.
I'm a proud EA.
Angelina Li @ 2025-04-03T02:23 (+6)
FWIW, I appreciated reading this :) Thank you for sharing it!
I so agree! I think there is something virtuous and collaborative for those of us who have benefited from EA and its ideas / community to just... being willing to stand up and say simply that. I think these ideas are worth fighting for.
<3