Third Wave Effective Altruism
By Ben_West @ 2023-06-17T15:45 (+225)
This is a frame that I have found useful and I'm sharing in case others find it useful.
EA has arguably gone through several waves:
Waves of EA (highly simplified model; see caveats below)

| | First wave | Second wave | Third wave |
| --- | --- | --- | --- |
| Time period | 2010[1]-2017[2] | 2017-2023 | 2023-?? |
| Primary constraint | Money | Talent | ??? |
| Primary call to action | Donations to effective charities | Career change | |
| Primary target audience | Middle-upper-class people | University students and early career professionals | |
| Flagship cause area | Global health and development | Longtermism | |
| Major hubs | Oxford > SF Bay > Berlin (?) | SF Bay > Oxford > London > DC > Boston | |
The boundaries between waves are obviously vague and somewhat arbitrary. This table is also overly simplistic: I first got involved in EA through animal welfare, for example, which is not listed on this table at all. But I think this is a decent first approximation.
It's not entirely clear to me whether we are actually in a third wave. People often overestimate the extent to which their local circumstances are unique. But there are two main things which make me think that we have a "wave" which is distinct from, say, mid 2022:
- Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturns[3]
- AI safety becoming (relatively) mainstream
If I had to choose an arbitrary date for the beginning of the third wave, I might choose March 22, 2023, when the FLI open letter on pausing AI experiments was published.
It remains to be seen if public concern about AI is sustained. Superintelligence was endorsed by a bunch of fancy people when it first came out, but they mostly faded away. If it is sustained though, I think EA will be in a qualitatively new regime: one where AI safety worries are common, AI safety is getting a lot of coverage, people with expertise in AI safety might get into important rooms, and where the field might be less neglected.
Third wave EA: what are some possibilities?
Here are a few random ideas; I am not intending to imply that these are the most likely scenarios.
| Example future scenario | Politics and Civil Society[4] | Forefront of weirdness | Return to non-AI causes |
| --- | --- | --- | --- |
| Description of the possible "third wave" (chosen to illustrate the breadth of possibilities) | There is substantial public appetite to heavily regulate AI. The technical challenges end up being relatively easy. The archetypal EA project is running a grassroots petition for a moratorium on AI. | AI safety becomes mainstream and "spins out" of EA. EA stays at the forefront of weirdness, and the people who were previously interested in AI safety turn their focus to digital sentience, acausal moral trade, and other issues that still fall outside the Overton window. | AI safety becomes mainstream and "spins out" of EA. AI safety advocates leave EA, and vibes shift back to "first wave" EA. |
| Primary constraint | Political will | Research | Money |
| Primary call to action | Voting/advocacy | Research | Donations |
| Primary target audience | Voters in US/EU | Future researchers (university students) | Middle-upper-class people |
| Flagship cause area | AI regulation | Digital sentience | Animal welfare |
Where do we go from here?
- I'm interested in organizing more projects like EA Strategy Fortnight. I don't feel very confident about what third wave EA should look like, or even that there will be a third wave, but it does seem worth spending time discussing the possibilities.
- I'm particularly interested in claims that there isn't, or shouldn't be, a third wave of EA (i.e. please feel free to disagree with the whole model, argue that we're still in wave 2, argue we might be moving towards wave 3 but shouldn't be, etc.).
- I'm also interested in generating cruxes and forecasts about those cruxes. A lot of these are about the counterfactual value of EA, e.g. will digital sentience become "a thing" without EA involvement?
This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here.
Thanks to a bunch of people for comments on earlier drafts, including ~half of my coworkers at CEA, particularly Lizka. "Waves" terminology stolen from feminism, and the idea that EA has been through waves and is entering a third wave is adapted from Will MacAskill; I think he has a slightly different framing, but he still deserves a lot of the credit here.
- ^
Starting date is somewhat arbitrarily chosen from the history listed here.
- ^
Arbitrarily choosing the coining of the word "longtermism" as the starting event of the second wave
- ^
Although Meta stock is back up since I first wrote this; I would be appreciative if someone could do an update on EA funding
- ^
Analogy from Will MacAskill: Quakers:EA::Abolition:AI Safety
Michael_PJ @ 2023-06-19T16:40 (+45)
I think this is a useful exercise for a few reasons.
- It's helpful for outsiders and new people to reconcile different material that they see from different points in time.
- It's helpful for people to clarify if they hold a bundle of positions that's associated with a particular wave. The old waves don't necessarily go away, they just cease to dominate. It's quite handy to be able to say "To understand my positions it may help to know that I'm a first-wave EA" or whatever.
I would be tempted to divide the second wave in two. I think there was a distinct period where career choice and talent constraints became a dominant theme, but before longtermism took off. So I'd say:
- As you have it
- Talent constraints become prominent, 80k becomes prominent, money becomes looser as OPP becomes a major player etc.
- Longtermism becomes prominent, money becomes silly as FTX enters the scene, WWOTF
Not sure on dates. This gives us a very short third wave, but I think that matches reality: the FTX crisis and AI panic killed/substantially changed it.
Also, I think an interesting row for your table would be "Prominent thinkers". It's interesting that in other movements the waves are typically spearheaded by new people and thinkers, whereas in our case it's often the same people again. Some extremely incomplete lists:
- Singer, Ord, Karnofsky ...
- Todd, MacAskill ...
- Ord, Cotton-Barratt, MacAskill, Bostrom ...
I also think this can help us with the question of "where is the next wave going?". We can instead ask "who are the thinkers who are gaining prominence?". It seems to me that there's a bit of a void, except in the AIS space, so maybe that will dominate by default.
levin @ 2023-06-18T19:12 (+40)
I agree that we're now in a third wave, but I think this post is missing an essential aspect of the new wave, which is that EA's reputation has taken a massive hit. EA doesn't just have less money because of SBF; it has less trust and prestige, less optimism about becoming a mass movement (or even a mass-elite movement), and fewer potential allies because of SBF, Bostrom's email/apology, and the Time article.
For that reason, I'd put the date of the third wave around the 10th of November 2022, when it became clear that FTX was not only experiencing a "liquidity crisis" but had misled customers, investors, and the EA community and likely committed massive fraud, and when the Future Fund team resigned. The other features of the Third Wave (the additional scandals and the rise in public interest in AI safety due to ChatGPT, GPT-4, the FLI letter, the CAIS statement, and so on) took a few months to emerge, but that week seems like the turning point.
Ben_West @ 2023-06-18T21:52 (+29)
Thanks! You may be interested in my recent post with Emma which found that FTX does not seem to have greatly affected EA's public image.
Jason @ 2023-06-18T23:10 (+25)
But I think the more important question is: what will the ultimate impact on public image be when EA really needs public support (e.g., for AI regulation) against powerful interests (e.g., big tech companies pushing toward AGI) who have every incentive to educate (fairly or otherwise) the public about SBF/EA connections?
Right now, the question has low salience even for the minority who have heard of EA. I'm not sure how well low-salience opinion will correlate to opinion after all sides take their best shots on a high-salience issue.
levin @ 2023-06-19T16:13 (+8)
That is a useful post, thanks. It changes my mind somewhat about EA's overall reputational damage, but I still think the FTX crisis exploded the self-narrative of ascendancy (both in money and influence), and the prospects have worsened for attracting allies, especially in adversarial environments like politics.
Ben_West @ 2023-06-19T17:59 (+5)
Yep, FTX's collapse definitely seems bad for EA!
Nathan Young @ 2023-06-19T16:52 (+2)
Is this your own experience, something you are confident of, or something you guess? If the first two, I might move towards you more.
ElliotJDavies @ 2023-08-09T13:06 (+2)
FWIW I have strong agreement from personal experience
Nathan Young @ 2023-06-18T20:15 (+17)
I sense this is true internally but not externally. I don't really feel like our reputation has changed much in general.
Maybe among US legislators? I don't know.
Lizka @ 2023-06-20T16:56 (+33)
I'm glad Ben shared this post!
I'm not sure how much I agree with the framework, or at least the idea that we're entering a third wave, but this seems like a useful tool/exercise.
Here's one consideration that comes to mind as I think about whether we're entering a third wave (written quickly, sorry in advance!).
I've got competing intuitions:
- We tend to over-react to (or over-update on) changes that seem really huge but end up not affecting priorities/the status quo that much.
- E.g. I think some events feel like they'll have a big effect, but they're actually just big in the news or on Twitter for a few weeks, and then everyone goes back to something pretty normal. Or relatedly, when something really bad happens and is covered by the news (e.g. an earthquake, or some form of violence): we might feel pressure to donate to a relevant charity, make a public statement, etc., when actually we should keep working on our mostly unrelated projects.
- At the same time, I think we tend to under-react and are too slow to make important changes based on things happening in the world. It's too easy to believe that everything is normal (while in reality, futures are wild). We're probably attached to projects (don't want to stare into the abyss) and probably dismiss some ideas/predictions as too weird without giving them enough consideration.
- COVID is probably an important example here (people weren't updating fast enough), and I can think of some other examples from my personal life.
My best guess (not resilient and pretty vague) is that we're generally too slow to update on in-the-world changes (that aren't about other people's views or the like), and too quick to update on ~memes in our immediate surroundings or our information sources/networks. I tentatively think that (public) opinion does in fact change a lot, but those changes are generally slower, and that we should be cautious about thinking that opinion-like changes are big, since small/local changes can feel huge/permanent/global.
So: to the extent that the idea that we're entering a third wave is based on the sense that AI safety concerns are going mainstream, I feel very unsure that we're interpreting things correctly. We have decent (and not vibes-based) signals that AI safety is in fact going mainstream, but I'm still pretty unsure if things will go back to ~normal. Of course, other things have also changed; specific influential people seem to have gotten worried, it seems like governments are taking AI (existential) risk seriously, etc. These seem less(?) likely to revert to normal (although I'm just guessing, again). I imagine that we can look at past case studies of this and get very rough ~base rates, potentially; I'd be very interested.
(I have some other concerns about using/believing this model, but just wanted to outline one for now.)
I'll also share some notes/comments I added on a slightly earlier draft. I haven't read the comments carefully, so at least some of this is probably redundant.
Some other possible "third waves" (very quick brainstorm)
- Attempting to stay relevant: AI safety blows up, EA still has a lot of people who have been thinking about AI safety for a long time and feel like they should be contributing, but they don't catch on to the fact that they're now a 100x smaller fraction of the field, and not the biggest players anymore. (Also seems possible that they're the experts and suddenly have lots of work, but it doesn't seem like a certain thing.)
- EA grows: AI attention brings a lot of attention to EA somehow, and EA grows a bunch through unusual pathways (unusual for us); everything else is similar (maybe this is the 4th wave somehow, since it hinges on something that hasn't happened). The main updates are about the size of the movement/network (what would EA look like if it had 20x more people?), and its composition (later-career folks, etc.)
- "Effective AIS": Little changes from now from EAâs POV except that AI safety is big outside of EA, but most of that is ~ineffective for one reason or another. At the same time, thereâs a fair amount of funding for âeffective AI safetyâ work (possibly something similar to what happens with effective climate work)
- I.e. a lot of stuff gets labeled "AI safety" that's not really AI safety (or is just not great). But big donors are interested in AI (existential) safety and there are people in ~EA-adjacent spaces who are attracted to EAxAIS because of competence and reasonableness of arguments; donors are excited about funding this kind of thing. We need to work on making work like this legible. We need a version of Longview/FP but for AI safety.
- Alternatively: AI safety becomes super politicized and people don't want to work with AI companies, so EAs are the only ones doing that.
- Alternatively: AI safety (in the popular understanding) becomes very strongly about something like copyright issues/bias/unemployment ("we shouldn't be distracted from the real problems today")
- Etc.
- "Back to normal+puddles": Attention on AI safety passes. Things are very similar except thereâs a ~quiet and occasionally noisy archipelago of AI-safety-oriented communities/projects (think puddles after a storm).
- Some people think that EA is "the AI safety thing" and confuse EA with that (like they still do with earning to give sometimes).
- The "third wave" might be prompted by something that isn't AI-related. Some possible scenarios:
- Something potentially FTX-related leads to the EA brand becoming toxic.
- The EA network sees a schism along something like GHD vs. non-GHD, "longtermism" vs. not, "weird" vs. not, etc.
- Alternatively, there's an overall fracturing into loosely-grouped and loosely-networked focus areas, like effective GHD, effective FAW, WAW, maybe pandemic preparedness, AI safety, AI governance, ~cause prioritization research, etc. Some organizations and groups focus on letting donors evaluate projects across a wide space given their priorities and philosophies (or giving career advice).
- EA has just grown too big to be useful to coordinate around, and we're seeing what looks like the beginnings of a ~healthy fracturing (which in reality might be past the point of no return); we're shifting to a model where there are cause-specific communities that are friendly to each other, and some orgs work across them and keep an eye on them, etc.
- Stuff that might happen that could change things fast
- Big war
- Big politicization moment of AI, or AI safety becomes very strongly about something like copyright issues/bias/unemployment
- Really scary AI thing that makes people really freaked out
- Something weird happens with labs (E.g. government does something strange) and they become super uncooperative?
- New pandemic
- Significantly more bad press about EA
- Huge endorsement of EA somehow / viral moment
- ~research becomes automated
- Etc.
ChanaMessinger @ 2023-06-22T14:10 (+9)
I like the distinction between overreacting and underreacting as being "in the world" vs. "memes" - another way of saying this is something like "object level reality" vs. "social reality".
If the longtermism wave is real, then that was pretty much about social reality, at least within EA, and it changed how money was spent and the things people said (as I understand it; I wasn't really socially involved at the time).
So to the extent that this is about "what's happening to EA", I think there's clearly a third wave here, where people are running and getting funded to run AI-specific groups, and people are doing policy and advocacy in a way I've never seen before.
If this ends up being a flash in the pan, then maybe the way to see this is something like a "trend" or "fad", like maybe 2022-spending was.
Which maybe brings me to something like: we might want these waves to consistently be about "what's happening in EA" vs. "what's happening in the world", and they're currently not.
Denis @ 2023-06-23T23:23 (+29)
I see a world that still desperately needs Wave 1, and I see a lot of work still to be done in that area.
I look at the effectivealtruism.org homepage, and a lot of what is mentioned there is still what you're referring to as wave 1.
I would even venture that in most of the world (perhaps outside the hubs), people are drawn to EA first by Wave 1 concepts. We get frustrated at the poverty and disease and war and poor governance and refugee crises and famines and... and we wonder why we can't do more to fix these with the significant resources we do devote to them. We see a group like EA looking at how to use limited resources to help people in the most effective way possible, and it seems like a critical answer to a long-neglected question.
Is it possible that what you're describing here is the cutting-edge aspects of EA - the areas where EA is breaking new ground philosophically and analytically, the areas which create lively, passionate debates on this forum, for example? And so, naturally, the ideas for the future come from areas like AI and longtermism. But a lot of vital EA work doesn't have to be cutting-edge research.
But IMHO there is still a massive opportunity to help most of the world's population in very concrete, tangible ways, and effective altruists can make vital contributions. You write that the goal of wave 1 was "donations to effective charities", but this is quite a limited reading of what EA can do. How about influencing how governments spend their aid budgets, which is often very different from how they would be most effective? There are a few groups doing this kind of work (e.g. the Gates Foundation), but there is still so much aid and so many donations being inefficiently spent. Ideas as simple as how to convince governments to just give people in developing countries cash rather than spending 10X that much trying to solve their problems for them.
I know that part of the vision is to focus on areas which are neglected, but I see a big difference between working in an area that is neglected (which is not true of global poverty and disease, for example) and working in a way that is neglected (quantitative, analytical, data-driven) even in areas which receive a lot of attention and even (badly-spent) money.
Apologies if this feels ill-informed. I'm writing as someone who isn't in any of the hubs and so just sees EA from the "outside."
Ben_West @ 2023-06-28T19:42 (+4)
Yeah interesting, this seems right to me and useful, thanks for the pushback.
Jason @ 2023-06-18T18:43 (+25)
It seems that one assumption underlying this frame is that EA will largely continue as a unified enterprise, perhaps with some cause areas "spinning off" into the mainstream.
I think the possibility of a breakup is real enough to include in one's model, at least as a footnote. That could be a collaborative divorce, or a messy one:
- An example of a collaborative divorce would be a mix between the second and third possible future scenarios. "Forefront of weirdness" and "return to first wave" have some real tensions, and it's plausible that a significantly greater degree of separation would help each group better achieve its own goals.
- A messy divorce could be caused by (e.g.) a further loss of confidence in centralized institutions due to FTX fallout, or worsening reports of sexual assault in some subcommunities, leading a significant fraction of people to break off for various reasons.
Michael_PJ @ 2023-06-19T16:42 (+11)
A "together but divided" future is also possible. There are many divisions in e.g. feminism, but everyone still wants to lay claim to the big banner, and will generally regard the others as allies to some degree (even if they write scathing critiques of each other).
Chris Leong @ 2023-06-30T02:40 (+4)
This felt much more likely before when the forum consisted of continuous drama, but it seems to have settled down now. However, there could potentially be another wave during SBF's trial.
TylerMaule @ 2023-06-20T11:09 (+17)
Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturns[3]...[3] Although Meta stock is back up since I first wrote this; I would be appreciative if someone could do an update on EA funding
Looking at this table, I expect the non-FTX total is about the same[1]; I'd wager that there is more funding committed now than during the first ~70% of the second wave period.[2]
I think most people have yet to grasp the extent to which markets have bounced back:
- The S&P 500 Total Return Index is within 6% of its all-time high; only ever spent ~4 months above today's value
- META is -26% from ATH, but is now at its highest since Jan '22, and has only ever spent ~10 months above this price
- ^
Dustin's net worth looks to be about -$7Bn from the peak (per his preferred source). Meanwhile, GiveWell (2-3x), Founders Pledge (3x), and GWWC (1.5x) numbers all seem to be higher
- ^
To be clear, I still think that the waves framing is useful and captures the prevailing narrative
Nathan Young @ 2023-06-18T16:11 (+14)
Fwiw, I think this kind of thinking is good but shows a deep need for synthesis. There is so much to read on the forum, and we are poor at succinctly nailing down exactly what we think in areas where we basically agree or where there is little disagreement.
As an example, most people do not have detailed maps of the sea, but on our maps we agree where it is. This is because the exact location of the sea is either not important or already widely agreed. If every person making a map of their town needed to draw all of the sea, that would be a big waste of time.
Similarly, because the main form of community knowledge transfer is blogs, there is little in the way of "things we all agree on", so many blogs end up covering ground covered in other blogs, because the authors or readers haven't read them (because there is too much to read).
The solution might seem to be more overviews from elites, but while I think elites make better choices on average I think that knowledge synthesis requires back and forth. I am much more likely to engage with what "we as a community think" if I feel like if a large chunk of the community disagreed then it would change.
Here, for instance, I think the push for longtermism was too short-lived to be the defining cause of this period.
To put it more concretely, I think what the community should be is a job for elites, but what it is and currently believes is a job for some other process. I think we do not have that process, and so everyone has to read way too much and write blogs that cover old ground. We have discussions many times without making progress because we can't focus on actual areas of disagreement.
This post is a partial solution - giving clear models of who we were and are - humans need narratives to move forward. But this process seems underrated: if all you have is agreement about what the second wave was, then there is a lot of work to be done by current EAs to understand what typical EAs like themselves think and do. As humans, that's a pretty good sign of what is safe to do. If we had a way to agree on this information more quickly, I would have more brain space for work and actual substantive disagreements, and more trust that the community could come to consensus here.
Seems like underrated work, thanks for doing it.
Nathan Young @ 2023-06-18T16:16 (+12)
As a more sassy point here I'll say something like "we talk a great game about how we want to improve the world and have billions in resources but seem to have a very immature understanding of ourselves as a community".
It seems to me either we should have a lower opinion of ourselves or we should do some community introspection. If as a community we were a person, this level of reflection/synthesis seems more like that of a child than of a mature and well-integrated adult.
Who are we, what do we want, what do we fear, how do we deal with trauma, how do we change our minds? All of these are questions that a mature person can answer but that a reckless and powerful youth might not. I think we are closer to the latter than we think.
This is my weak view, not some kind of median view. Unless it gets lots of upvotes, in which case...
Chris Leong @ 2023-06-30T02:43 (+2)
"I think the push for longtermism was shorter than being the defining cause of this period" - How so?
LukeDing @ 2023-06-19T11:03 (+11)
One observation is that the first transition, from 1st to 2nd wave, was deliberate in that it came after a strategic review conducted by CEA, whilst the second transition was imposed by events. Perhaps the consequences of the first transition also have an influence (not sure how strong) on the trajectory of the second transition, which is still unfolding.
basil.halperin @ 2023-06-19T22:08 (+16)
Any links on the referenced strategic review? Thanks!
MichaelPlant @ 2023-06-19T11:07 (+9)
Thanks for writing this up. I've often thought about EA in terms of waves (borrowing the idea from feminist theory) but never put fingers to keyboard. It's hard to do, because there is so much vagueness and so many currents and undercurrents happening. Some bits that seem missing:
You can identify waves within cause areas as well as between cause areas. Within 'future people', it seemed to go from X-risks to 'broad longtermism' (and I guess it's now going back to a focus on AI). Within animals, it started with factory-farmed land animals, and now seems to include invertebrates and wild animals. Within 'present people', it was objective wellbeing - poverty and physical health - and now is (I think and hope) shifting to subjective wellbeing. (I certainly see HLI's work as being part of 2nd or 3rd wave EA).
Another trend is that EA initially seemed to be more pluralistic about what the top cause was ("EA as a question"), and then became more monistic with a push towards longtermism ("EA as an answer"). I'm not sure what the next stage is.
DPiepgrass @ 2023-06-20T00:47 (+11)
I think EA is surely still pluralistic ("a question"), and I wouldn't be at all surprised if longtermism gets de-emphasized or modified. (I am uncertain, as I don't live in a hub city and can't attend EAG, but as EA expands, new people could have new influence even if EAs in today's hub cities are getting a little rigid.)
In my fantasy, EAs realize that they missed 50% of all longtermism by focusing entirely on catastrophic risk while ignoring the universe of Path Dependencies. Consider the humble Qwerty keyboard - impossible to change, right? (Well, I'm not on a Qwerty keyboard, but I digress.) What if you had the chance to sell keyboards in 1910? There would still be time to change which keyboard layout became dominant. Or what if you had the chance to prop up the Esperanto movement in its heyday around that time? This represents the universe of interventions EAs didn't notice. The world isn't calcified in every way yet - if we're quick, we can still make a difference in some areas. (Btw, before I discovered EA, that was my angle on the software industry, and I still think it's important and vastly underfunded, as capitalism is misaligned with longtermism.)
In my second fantasy, EAs realize that many of the evils in the world are a byproduct of poor epistemics, so they work on things that either improve society's epistemics or (more simply) work around the problem.
ChanaMessinger @ 2023-06-22T14:15 (+2)
I like the point of waves within cause areas! Though I suspect there would be a lot of disagreement - e.g. people who kept up with the x-risk approach even as WWOTF was getting a lot of attention.
Jeffrey Kursonis @ 2023-06-18T08:31 (+6)
I like this framing, and here are some thoughts on movements from a movement veteran. First, it's obvious EA moved from a focus on raising money for effective charities to longtermism/X-risk, and it's interesting to see all the cultural flows of that in EA. And then it seems fairly obvious that the series of scandals, from FTX to sexual harassment, has had a reverberating shock wave effect in EA, and that to me is the clearest sign EA is primed for a new wave, the third. I would date it not only by AI but also by some date averaging of these big difficult stories from Dec. 2022 to early 2023, when FTX was still everywhere while ChatGPT debuted globally, along with the sex stuff. The fact that EA is years ahead of any other organized movement in organizing on AI Safety means it can be a hub, and that has value moving forward.
I think the really big question is this: what will all the trauma and embarrassment, and people reassessing themselves and EA as a whole, end up producing on the steering rudder of EA... where will it turn, in what ways will it change? Comments on that would be very interesting.
Here's my comment, and if you were to read all my posts and comments you could see the trend: I don't know at all what direction the rudder will steer us toward, but I hope that it includes a huge cultural reformation surrounding Utilitarianism. One of the iconic quotes that gives me this idea is from an interview with SBF where he says he's a Benthamite Utilitarian, very soon before he's revealed to be a historically awful fraudster who was spawned and enabled by a bunch of Benthamite Utilitarians calling themselves Effective Altruists.
Now I know some leading lights have spoken out recently saying, "Aw, we haven't really been hard core Benthamites... we've always been more balanced." I would say that's classic blind-spot talk, because EA, you have no idea how strongly Utilitarian you come across to anyone from the outside... you are not at all balanced, you are hard core Utilitarian... if you think you're balanced, you're just too inside to know how things look from the outside. I think what's happening mentally is that you are smart enough to imagine the freakishly crazy side of extreme Utilitarianism and you know you aren't that... but that's because nobody is that freakish except literal psychopath outliers who don't count. Instead you are still firmly situated in a kind of Utilitarianism which, though balanced with some common sense (which is both socially and literally unavoidable), is still very far over from most of your peers in non-EA culture worlds.
I can imagine all the defensive comments saying it's not that bad, we're more balanced, but as I said above that's just being too inside to see from the outside - if there was any one major cultural thing that typified EA and EA people it would be utilitarianism...eating Huel alone at your desk so you can grind on to be more effective is to me the iconic image of that.
I know this is tough love, but I do dearly love EA... and I just want everyone to be happy, to stop eating Huel alone at your desk, and to discover the joy of being with others and having an ice cream cone now and then. You'll be far better optimized by that to do your good work. Utilitarianism is optimizing for robots, not for humans. Effective Altruists are humans helping humanity... optimize for humanness.
Brad West @ 2023-06-19T21:05 (+2)
It's amusing how you argue against hardcore utilitarianism by indicating that factoring in an agent's human needs is indispensable for maximizing impact. To the extent that being good to yourself is necessary for maximizing impact, a hardcore utilitarian would do so.
Utilitarianism is optimizing for whatever agent is operative... Humans or robots. It's just realizing that the experiences of other beings throughout space and time matter just as much as your own. There is nothing wrong with being extreme and impartial in your compassion for others, which is the essence of utilitarianism. To the extent you are lobbing criticisms of people not being effective because they're not taking care of themselves, it isn't a criticism of "hardcore" utilitarianism. It's a criticism of them failing to integrate the productivity benefits from taking care of themselves into the analysis.
Jeffrey Kursonis @ 2023-06-20T02:59 (+1)
Well yes, your logic is perfect, but it's a lot like the logic of communism... if humans did communism perfectly it would usher in world peace and Utopia... the problem is not ideal communism, it's that somehow it just doesn't fit humanity well. Yours is the exact same argument you would hear over and over when people still argued passionately for communism... "They're just not doing it right!!"... after a while you realize, it just isn't the right thing no matter how lovely on paper. Eventually almost all of them let go of that idealism, but it doggedly held on a long time, and I'm sure that will be the case for many EAs holding on way too long to utilitarianism.
Hardly anything really does fit us... the best path is to keep iterating reachable modifications wherever you are... I can see the benefits of ideal utilitarianism and I appreciate early EA embracing it with gusto... it got fantastic results, no way to argue with that. To me EA is one of the brightest lights in the world. But I've been steering movements and observing them for many decades, and it's clear to me that, as in the OP, EA is transitioning into a new phase or wave, and the point of resistance I come up against when I discuss there being more art in EA is the Utilitarian response of "why waste money on aesthetics", or I hear about stressed, anxious EAs and significant mental health needs... the only clear answer I see to these two problems is to reform the Utilitarian part of EA; that's what's blocking it from moving into the next era. You can run at a fast pace for a long time when you're young... but eventually it just isn't sustainable. That's my thesis... early EA was utilitarian awesome, but time moved on and now it's not sustainable anymore.
Changing yourself is hard, I've done it a few times, usually it was forced on me. And I totally get this is not obvious to most in EA...it's not popular to tell utilitarians, "don't be utilitarian"...but it's true, you should not be so utilitarian...because that's for robots, but you're human. It's time to move on to a more mature and sustainable path.
DPiepgrass @ 2023-06-20T14:57 (+10)
Well... Communism is structurally disinclined to work in the envisioned way. It involves overthrowing the government, which involves "strong men" and bloodshed, the people who lead a communist regime tend to be strongmen who rule with an iron grip ("for the good of communism", they might say) and are willing to use murder to further their goals. Thanks to this it tends to involve a police state and central planning (which are not the characteristics originally envisioned). More broadly, communism isn't based on consequentialist reasoning. It's an exaggeration to say it's based on South Park reasoning: 1. overthrow the bourgeoisie and the government so communists can be in charge, 2. ???, 3. utopia! But I don't think this is a big exaggeration.
Individuals, on the other hand, can believe in whatever moral system they feel like and follow its logic wherever it leads. Taking care of yourself (and even your friends/family) not only perfectly fits within the logic of (consequentialist) utilitarianism, it is practical because its logic is consequentialist (which is always practical if done correctly). Unlike communism, we can simply do it (and in fact it's kind of hard not to, it's the natural human thing to do).
What's weird about your argument is that you made no argument beyond "it's like the logic of communism". No, different things are different, you can't just make an analogy and stop there (especially when criticizing logic that you yourself described as "perfect" - well gee, what hope does an analogy have against perfect logic?)
when I discuss there being more art in EA is the Utilitarian response of "why waste money on aesthetics", or I hear about stressed, anxious EAs and significant mental health needs... the only clear answer I see to these two problems is to reform the Utilitarian part of EA
I think what's going on here is that you're not used to consequentialist reasoning, and since the founders of EAs were consequentialists, and EA attracts, creates and retains consequentialists, you need to learn how consequentialists think if you want to be persuasive with them. I don't see aesthetics as wasteful; I routinely think about the aesthetics of everything I build as an engineer. But the reason is not something like "beauty is good", it's a consequentialist reason (utilitarian or not) like "if this looks better, I'm happier" (my own happiness is one of my terminal goals) or "people are more likely to buy this product if it looks good" (fulfilling an instrumental goal) or "my boss will be pleased with me if he thinks customers will like how it looks" (instrumental goal). Also, as a consequentialist, aesthetics must be balanced against other things - we spend much more time on the aesthetics of some things than other things because the cost-benefit analysis discounts aesthetics for lesser-used parts of the system.
You want to reform the utilitarian part, but it's like telling Protestants to convert to Catholicism. Not only is it an extremely hard goal, but you won't be successful unless you "get inside the mind" of the people whose beliefs you want to change. Like, if you just explain to Protestants (who believe X) why Catholics believe the opposite of X, you won't convince most of them that X is wrong. And the thing is, I think when you learn to think like a consequentialist - not a naive consequentialist* but a mature consequentialist who values deontological rules and virtues for consequentialist reasons - at that point you realize that this is the best way of thinking, whether one is EA or not.
(* we all still remember SBF around here of course. He might've been a conman, but the scary part is that he may have thought of himself as a consequentialist utilitarian EA, in which case he was a naive consequentialist. For you, that might say something against utilitarianism, but for me it illustrates that nuance, care and maturity are required to do utilitarianism well.)
Jeffrey Kursonis @ 2023-06-21T00:03 (+6)
Yes, I appreciate very much what you're saying; I'm learning much from this dialogue. I think what I said that didn't communicate well to you and Brad West isn't some kind of comparison of utilitarianism and communist thought... but rather how people defend their ideal when it's failing, whatever it is (religion, etc.): "They're not doing it right"... "If you did it right (as I see it) then it would produce much better stuff".
EA is uniquely bereft of art in comparison to all other categories of human endeavor: education, business, big tech, military, healthcare, social society, etc. So for EA there's been ten years of incredible activity and massive funding, but no art in sight...so whatever is causing that is a bug and not a feature. Maybe my thesis that utilitarianism is the culprit is wrong. I'd be happy to abandon that thesis if I could find a better one.
But given that EA "attracts, creates and retains consequentialists" as you say, and that they are hopefully not the bad kind that doesn't work (naive) but the good kind that works (mature), then why the gaping hole in the center where the art should be? I think it's not naive versus mature utilitarianism; it's that utilitarianism is a mathematical algorithm and simply doesn't work for optimizing human living... it's great for robots. And great for the first pioneering wave of EA blazing a new path... but ultimately unsustainable for the future.
Eric Hoel does a far better job outlining the poison in utilitarianism that remains no matter how you dilute it or claim it to be naive or mature (but unlike him I am an Effective Altruist).
And of course I agree with you on the "it's hard to tell one religion to be another religion" point, which I myself made in my reply post. In fact, I have a college degree in exactly that - Christian Ministry with an emphasis in "missions", where you go tell people in foreign countries to abandon their culture and religion and adopt yours... and amazingly, you'd be surprised at how well it works. Any religious group that does proselytizing usually gets decent results. I don't agree with doing that anymore with religion, but it is surprisingly effective... and so I don't mind telling a bunch of utilitarians to stop being utilitarians... on the other hand, if I can figure out a different reason for the debilitating lack of art in EA and the anxious mental health issues connected to not saving enough lives guilt, I'll gladly change tactics.
If you compare EA to all those other human endeavors I listed above, what's the point of differentiation? Why do even military organizations have tons of art compared to EA?
You seem to think that if art were good for human optimization then consequentialists should have plenty, so why don't they have much around here?
Thanks for helping me think these things through.
DPiepgrass @ 2023-06-21T23:51 (+1)
Thanks for taking my comment in the spirit intended. As a noncentral EA it's not obvious to me why EA has little art, but it could be something simple like artists not historically being attracted to EA. It occurs to me that membership drives have often been at elite universities that maybe don't have lots of art majors.
Speaking personally, I'm an engineer and an (unpaid) writer. As such I want to play to my strengths, and any time I spend on making art is time not spent using my valuable specialized skills... at least I started using AI art in my latest article about AI (well, duh). I write almost exclusively about things I think are very important, because that feeling of importance is usually what drives me to write. But the result has been that my audience has normally been very close to zero (even when writing on EA Forum), which caused me to write much less and, when I do write, I tend to write on Twitter instead or in the comment areas of ACX. Okay, I guess I'm not really going anywhere with this line of thought, but it's a painful fact that I sometimes feel like ranting about. Here are a couple of vaguely related hypotheses: (i) maybe there is some EA art but it's not promoted well so we don't see it; (ii) EAs can imagine art being potentially valuable, but are extremely uncertain about how and when it should be used, and so don't fund it or put the time into it. EAs want to do "the most impactful thing they can" and it's hard to believe art is it. However, you can argue that EA art is neglected (even though art is commonplace) and that certain ways of using art would be impactful, much as I argued that some of the most important climate change interventions are neglected (even though climate change interventions are commonplace). I would further argue that artists are famously inexpensive to hire, which can boost the benefit/cost ratio (related: the most perplexing thing to me about EA is having hubs in places that are so expensive it would pain me to live there; I suggested Toledo, which is inexpensive and near two major cities, earning no votes or comments. Story of my life, I swear, and I've been thinking of starting a blog called "No one listens to me".)
Any religious group that does proselytizing usually gets decent results.
I noticed that too, but I assumed that (for unknown reasons) it worked better for big shifts (pagan to Christian) than more modest ones. But I mentioned "Protestant to Catholic" specifically because the former group was formed in opposition to the latter. I used to be Mormon; we had a whole doctrine about why our religion made more sense and was the True One, and it's hard to imagine any other sect could've come along and changed my mind unless they could counter the exact rationales I had learned from my church. As I see it, mature consequentialist utilitarianism is a lot like this. Unless you seem to understand it very well, I will perceive your pushback against it as being the result of misunderstanding it.
So, if you say utilitarianism is only fit for robots, I just say: nope. You say: utilitarianism is a mathematical algorithm. I say: although it can be put into mathematical models, it can also be imprinted deeply in your mind, and (if you're highly intelligent and rational) it may work better there than in a traditional computer program. This is because humans can more easily take many nuances into account in their minds than type those nuances into a program. Thus, while mental calculations are imprecise, they are richer in detail which can (with practice) lead to relatively good decisions (both relative to decisions suggested by a computer program that lacks important nuances, and relative to human decisions that are rooted in deontology, virtue ethics, conventional wisdom, popular ideology, or legal precedent).
I did add a caveat there about intelligence and rationality, because the strongest argument against utilitarianism that comes to mind is that it requires a lot of mental horsepower and discipline to be used well as a decision procedure. This is also why I value rules and virtues: a mathematically ideal consequentialist would have no need of them per se, but such a being cannot exist because it would require too much computational power. I think of rules and virtues as a way of computationally bounding otherwise intractable mental calculations, though they are also very useful for predicting public perception of one's actions (as most of the public primarily views morality through the lenses of rules and virtues). Related: superforecasters are human, and I don't think it's a coincidence that lots of EAs like forecasting as a test of intelligence and rationality.
However, I think that consequentialist utilitarianism (CU) has value for people of all intelligence levels for judging which rules and virtues are good and which are not. For example, we can explain in CU terms why common rules such as "don't steal" and "don't lie" are usually justified, and by the same means it is hard to justify rules like "don't masturbate" or the Third Reich's rule that only non-Jewish people of "German or kindred blood" could be citizens (except via strange axioms).
This makes it very valuable from a secular perspective: without CU, what other foundation is there to judge proposed rules or virtues? Most people, it seems to me, just go with the flow: whatever rules/virtues are promoted by trusted people are assumed to be good. This leads to people acting like lemmings, sometimes believing good things and other times bad things according to whatever is popular in their tribe/group, since they have no foundational principle on which to judge (they do have principles promoted by other people, which, again, could be good or bad). While Christians say "God is my rock", I say "these two axioms are my bedrock, which led me to a mountain I call mature consequentialist utilitarianism". I could say much more on this but alas, this is a mere comment in a thread and writing takes too much time. But here's a story I love about Heartstone, the magic gemstone of morality.
For predictive decision-making, choosing actions via CU works better the more processing power you use (whether mental or silicon). Nevertheless, after arriving at a decision, it should always be possible to explain the decision to people without access to the same horsepower. We shouldn't say "My giant brain determined this to be the right decision, via reasoning so advanced that your puny mind cannot comprehend it. Trust me." It seems to me that anyone using CU should be able to explain (and defend) their decision in CU terms that don't require high intelligence to understand. However, (i) the audience cannot verify that the decision is correct without using at least as much computing power, they can only verify that the decision sounds reasonable, (ii) different people have different values which can correctly lead to disagreement about the right course of action, and (iii) there are always numerous ways that an audience can misunderstand what was said, even if it was said in plain and unambiguous language (I suspect this is because many people prefer other modes of thought, not because they can't think in a consequentialist manner.)
Now, just in case I sound a bit "robotic" here, note that I like the way I am. Not because I like sounding like Spock or Data, but because there is a whole life journey spanning decades that led to where I am now, a journey where I compared different ways of being and found what seem to be the best, most useful and truth-centered principles from which to derive my beliefs and goals. (Plus I've always loved computers, so a computational framing comes naturally.)
a different reason for [...] the anxious mental health issues connected to not saving enough lives guilt[?]
I think a lot of EAs have an above-average level of empathy and sense of responsibility. My poth (hypothesis) is that these things are what caused them to join EA in the first place, and also caused them to have this anxiety about lives not saved and good not done. This poth leads me to predict that such a person will have had some anxiety from the first day they found out about the disease and starvation in Africa, even if joining EA managed to increase that anxiety further. For me personally, global poverty bothered me since I first learned about it, I have a deep yearning to improve the world that appeared 15+ years before I learned about EA, I don't feel like my anxiety increased after joining EA, and the analysis we're talking about (in which there is a utilitarian justification not to feel bad about only giving 10% of our income) helps me not to feel too bad about the limits of my altruism, although I still want to give much more to fund direct work, mainly because I have little confidence in my ability to persuade other EAs about what I think needs to be done (only 31 karma including my own strong upvote? Yikes!)
Why do even military organizations have tons of art compared to EA?
Is that true? I'm not surprised if military personnel make a lot of art, but I don't expect it from the formal structures or leadership. But, if a military does spend money on art, I expect it's a result of some people who advocated for art to sympathetic ears that controlled the purse strings, and that this worked either because they were persuasive or because people liked art. The same should work in EA if you find a framing that appeals to EAs. (which reminds me of the odd fact that although I identify strongly with common EA beliefs and principles, I have little confidence in my ability to persuade other EAs, as I am often downvoted or not upvoted. I cannot explain this.)
You seem to think that if art were good for human optimization then consequentialists should have plenty, so why don't they have much around here?
My guess is that it's a combination of
- the difficulty EAs have had seeing art as an impactful intervention (although I feel like it could be, e.g. as a way of attracting new EAs and improving EA mental health). Note: although EAs like theoretical models and RCTs demonstrating good cost/benefit, my sense is that EA leaders also understand (in a CU manner) that some interventions are valuable enough to support even when there's no solid theoretical/scientific basis for them.
- artists rarely becoming EAs (why? maybe selection bias in membership drives... maybe artists being turned off by EA vibes for some reason...)
- EA being a young movement, so (i) lots of things still haven't been worked out and (ii) the smaller the movement is, the less likely that art is worthy of funding (the explanation for this assertion feels too complicated to briefly explain.)
- something else I didn't think of (???)
Jeffrey Kursonis @ 2023-06-22T20:07 (+2)
Wow thanks for your long and thoughtful reply. I really do appreciate your thinking and I'm glad CU is working for you and you're happy with it...that is a good thing.
I do think you've given me a little boost in my argument against CU, unfortunately, in the idea that our brain just doesn't have enough compute. There was a post a while back from a well-known EA about their long experience starting orgs and "doing EA stuff", and how the lesson they'd taken from it all is that there are just too many unknown variables in life for anything we try to build and plan outcomes for to really work out how we hoped... it's a lot of shots in the dark and sometimes you hit. That is similar to my experience as well... and the reason is we just don't have enough data nor enough compute to process it all... nor adequate points or spectrums of input. The thing that better fits in that kind of category is a robot that, with an AI mind, can do far more compute... but even they are challenged. So for me that's another good reason against CU optimizing well for humans.
And the other big thing I haven't mentioned is our mysterious inner life, the one that responds to spirituality and to emotions within human relationships, and to art...this part of us does not follow logic or compute...it is somehow organic and you could almost say quantum in how we are connected to other people...living with it is vital for happiness...I think the attraction of CU is that it adds to us the logic side that our inner life doesn't always have...and so the answer is to live with both together...to use CU thinking for the effective things it does, but also to realize where it is very ineffective toward human thriving...and so that may be similar to the differences you see between naive and mature CU. Maybe that's how we synthesize our two views.
How I would apply this to the Original Post here is that we should see "the gaping hole where the art should be" in EA as a form of evidence of a bug in EA that we should seek to fix. I personally hope as we turn this corner toward a third wave, we will include that on the list of priorities.
DPiepgrass @ 2023-06-27T21:14 (+3)
Well, okay. I've argued that other decision procedures and moralities do have value, but are properly considered subordinate to CU. Not sure if these ideas swayed you at all, but if you're Christian you may be thinking "I have my Rock" so you feel no need for another.
If you want to criticize utilitarianism itself, you would have to say the goal of maximizing well-being should be constrained or subordinated by other principles/rules, such as requirements of honesty or glorifying God/etc.
You could do this, but you'd be arguing axiomatically. A claim like "my axioms are above those of utilitarians!" would just be a bare assertion with nothing to back it up. As I mentioned, I have axioms too, but only the bare minimum necessary, because axioms are unprovable, and years of reflection led me to reject all unnecessary axioms.
You could say something like the production of art/beauty is intrinsically valuable apart from the well-being it produces, and thus utilitarianism is flawed in that it fails to capture this intrinsic value (and only captures the instrumental value).
The most important thing to realize is that "things with intrinsic value" is a choice that lies outside consequentialism. A consequentialist could indeed choose an axiom that "art is intrinsically valuable". Calling it "utilitarian" feels like nonstandard terminology, but such value assignment seems utilitarian-adjacent unless you treat it merely as a virtue or rule rather than as a goal you seek after.
Note, however, that beauty doesn't exist apart from an observer to view it, which is part of the reason I think this choice would be a mistake. Imagine a completely dead universe - no people, no life, no souls/God/heaven/hell, and no chance life will ever arise. Just an endless void pockmarked by black holes. Suppose there is art drifting through the void (perhaps Boltzmann art, by analogy to a Boltzmann brain). Does it have value? I say it does not. But if, in the endless void, a billion light years beyond the light cone of this art that can never be seen, there should be one solitary civilization left alive, I argue that this civilization's art is infinitely more valuable. More pointedly, I would say that it is the experience of art that is valuable and not the art itself, or that art is instrumentally valuable, not intrinsically. Thus a great work of art viewed by a million people has delivered 100 times as much value as the same art only seen by 10,000 people - though one should take into account countervailing factors such as the fact that the first 10,000 who see it are more likely to be connoisseurs who appreciate it a lot, and the fact that any art you experience takes away time you might have spent seeing other art. E.g. for me, I wish I could listen to more EA and geeky songs (as long as they have beautiful melodies), but lacking that, I still enjoy hearing nice music that isn't tailored as much to my tastes. Thus EA art is less valuable in a world that is already full of art. But EA art could be instrumentally valuable both in the experiences it creates (experiences with intrinsic value) and in its tendency to make the EA movement healthy and growing.
So, to be clear, I don't see a bug in utilitarianism; I see the other views as the ones with bugs. This is simply because I see no flaws in my moral system, but I do see flaws in other systems. There are of course flaws in myself, as I illustrate below.
And the other big thing I haven't mentioned is our mysterious inner life, the one that responds to spirituality and to emotions within human relationships, and to art...this part of us does not follow logic or compute...it is somehow organic and you could almost say quantum in how we are connected to other people...living with it is vital for happiness
I think it's important to understand and accept that humans cannot be maximally moral; we are all flawed. And this is not a controversial statement to a Christian, right? We can be flawed even in our own efforts to act morally!

I'll give an example from last year, when Russia invaded Ukraine. I was suddenly and deeply interested in helping, seeing that a rapid response was needed. But other EAs weren't nearly as interested as I was. I would've argued that although Ukrainian lives are less cost-effective to save than African lives, there was a meaningful longtermist angle: Russia was tipping the global balance of power from democracy to dictatorship, and if we didn't respond strongly against Putin, Xi Jinping could be emboldened to invade Taiwan; in the long term, this could lead to tyranny taking over the world (EA relies on freedom to work, so this is a threat to us). Yet I didn't make this argument to EAs; I kept it to myself (perhaps for the same reason I waited ~6 months to publish this: I was afraid EAs wouldn't care). I ended up thinking deeply about the war: about what it might be like as a civilian on the front lines, about what kinds of things would help Ukraine most on a small-ish budget, about how the best Russians weren't getting the support they deserved, and about how events might play out (which turned into a hobby of learning and forecasting, and the horrors of war I've seen are like, holy shit, did I burn out my empathy unit?). But not a single EA organization offered any way to help Ukraine, and I was left looking for other ways to help. So I ended up giving $2000 CAD to Ukraine-related causes, half of which went to Ripley's Heroes, which turned out to be a (probably, mostly) fraudulent organization. Not my best EA moment!

From a CU perspective, I performed badly. I f**ked up. I should've been able to look at the situation and accept that there was no way I could give military aid effectively with the information I had. And I certainly knew that military aid was far from an ideal intervention and that there were probably better interventions; I just didn't have access to them, AFAIK. The correct course of action was not to donate to Ukraine (understanding that some people could help effectively, just not me). But emotionally I couldn't accept that. You know, though, I have no doubt that it was myself that was flawed and not my moral system; CU's name be praised! Also, I don't really feel guilty about it; I just think "well, I'm human, I'll make some mistakes and no one's judging me anyway, hopefully I'll do better next time."
In sum: humans can't meet the ideals of (M)CU, but that doesn't mean (M)CU isn't the correct standard by which to make and evaluate choices. There is no better standard. And again, the Christian view is similar, just with a different axiomatic foundation.
Edit: P.S. a relevant bit of the Consequentialism FAQ:
5.6: Isn't utilitarianism hostile to music and art and nature and maybe love?
No. Some people seem to think this, but it doesn't make a whole lot of sense. If a world with music and art and nature and love is better than a world without them (and everyone seems to agree that it is) and if they make people happy (and everyone seems to agree that they do) then of course utilitarians will support these things.
There's a more comprehensive treatment of this objection in 7.8 below.
Brad West @ 2023-06-21T19:50 (+1)
If art production is critical to EA's ability to maximize well-being and EA is failing to produce it, then this is a failure of EA to be utilitarian enough. Your criticism perhaps stems from the culture and notions of people who happen to subscribe to utilitarianism, not from utilitarianism itself. Utilitarians are human, and thus capable of being in error as to what will do the most good.
If you want to criticize utilitarianism itself, you would have to say the goal of maximizing well-being should be constrained or subordinated by other principles/rules, such as requirements of honesty or glorifying God/etc. You could say something like the production of art/beauty is intrinsically valuable apart from the well-being it produces, and thus utilitarianism is flawed in that it fails to capture this intrinsic value (and only captures the instrumental value).
I think a more apt target for your criticism would not be utilitarianism itself, but rather the cultures and mentalities of those who practice it.
Brad West @ 2023-06-20T16:45 (+4)
I think we are just using two different definitions of utilitarian. I am talking about maximizing well-being... If that means adding more ice cream or art into agents' lives, then utilitarianism demands ice cream and art. Utilitarianism concerns the goal: the maximization of the net value of experience.
A more apt comparison than a specific political system such as communism, capitalism, or mercantilism would be a political philosophy that defined the goal of governmental systems as "advancing the welfare of people within a state." Then, different political systems could be evaluated by how well they achieve that goal.
Similarly, utilitarianism is agnostic as to whether one should drink Huel, produce and enjoy art, work X hours per week, etc. All of these questions come down to whether the agent is producing better outcomes for the world.
So if you're saying that the habits of EAs are not sustainable (and thus aren't doing the greatest good, ultimately), you're not criticizing utilitarianism. Rather, you're saying they are not being the best utilitarians they can be. You can't challenge utilitarianism by saying that utilitarians' choices don't produce the most good; then you're just challenging the choices they make within a utilitarian lens.
Linch @ 2023-06-21T00:46 (+4)
If it were the case that belief in utilitarianism predictably causes the world to have less utility, then under basically any common moral system there's no strong case for spreading utilitarianism[1]. In such a world, there is of course no longer a utilitarian case for spreading utilitarianism, and afaik the other common ethical systems would not endorse spreading utilitarianism, especially if it reduces net utility.
Now "historically utilitarianism has led to less utility" does not strictly imply that in the future "belief in utilitarianism predictably causes the world to have less utility." But it is extremely suggestive, and more so if it looks overdetermined rather than due to a specific empirical miscalculation, error in judgement, or bad actor.
I'm personally pretty neutral on whether utilitarianism has been net negative. The case against is that I think Bentham was unusually far-seeing and correct relative to his contemporaries. The strongest case for, in my opinion, probably comes from people in our cluster of ideas[2] accelerating AI capabilities (runners-up include FTX, some specific culty behaviors, and well-poisoning of good ideas), though my guess is that there isn't much evidence that the more utilitarian EAs are more responsible.
On a more theoretical level, Askell's distinction between utilitarianism as a criterion of rightness vs. as a decision procedure is also relevant here.
Chris Leong @ 2024-02-07T15:01 (+5)
If EA decided to pursue the politics and civil society route, I would suggest that it would likely make sense to follow a strategy similar to the one the Good Ancestors Project has been following in Australia. This project has done a combination of a) outreach to policy-makers, b) coordinating an open letter to the government, c) making a formal submission to a government inquiry, and d) walking EAs through the process of making their own submissions (you'd have to check with Greg to see if he still thinks all of these activities are worthwhile).
Even though AI policy seems like the highest priority at the moment, there are benefits to working on multiple cause areas, since a) you can only submit to an inquiry when one is happening, so more cause areas increase the chance that there is something relevant, and b) there's a nice synergy that comes from getting EAs who have different cause areas as their main focus to submit to the inquiries for other areas.
Greg has a great explanation where he talks about EA having spent a lot of effort figuring out how to leverage our financial capital and our career capital to make the world better, while neglecting our political capital. Obviously there's the question of whether we have good ways to deploy that capital, but I suspect the answer is that we do.
I'm not claiming that this is necessarily the route forward, but it is likely worth exploring in countries with well-developed EA communities.
SebastianSchmidt @ 2023-07-02T18:01 (+5)
Thanks for the model - I think it's useful.
I think it'd probably be more appropriate to say that wave 2 was x-risk (and not broad longtermism) and/or that longtermism became x-risk. Before reading your thoughts on the possibilities for the third wave, I spent a few seconds developing my thoughts. The thoughts were:
1. Target audience: More focus on Global South/LMIC.
2. Culture: Diversification and more ways of living (e.g., the proportion of Huel drinkers goes down).
3. Call-to-action: A higher-level community/set of ideas (e.g., distilling and formalizing the method of EA and/or longtermism) and cause-specific communities (AI safety, etc.).
4. Other: EA as a label gets reduced (e.g., if CEA changes its name).
ElliotJDavies @ 2023-08-09T13:03 (+4)
Substantially less money, through a combination of Meta stock falling [...]
I have also been talking about META stock falling. But when I looked it up recently, I noticed META is close to an all-time high.
GidiKadosh @ 2023-06-24T18:01 (+4)
Thank you for writing this!
I just wanted to flag that this format could fixate us on the structure of our past strategy.
For instance (and this is just one example out of many), I believe that the past strategy of the movement was inherently incoherent:
From the way you describe the second wave, it's clear that the movement was focused on "career changes" of "talent" into "longtermism". However, the movement didn't describe itself as a longtermism hiring agency; it described itself as a community of people who seek the most impactful courses of action. This post and comment describe this criticism at length.
If this criticism makes sense, then we might consider a third wave that looks something like:
- We've split the brand of EA from a few cause-specific brands that focus on the most effective interventions in their area (e.g., effective AI safety, effective climate change, and so on).
- The new EA brand focuses on teaching EA tools and principles (and maybe also refers people to the resources of the cause-specific communities).
- Each cause-specific brand has its own calls to action, target audiences, and so on.
For instance, AI safety's call-to-action could be research, animal advocacy's call-to-action could be donations, and so on.
I think that this format of waves is great for brainstorming, and I'm very happy that such brainstorming is happening. However, this is just one example of why our former strategy might have been suboptimal, and of how this format could fixate us on similar directions.
JWS @ 2023-06-20T22:02 (+3)
Great post, Ben, and I think the idea of 'EA waves' is a useful framing, even if not ~100% historically accurate.
Object-level answer: I have a lot of sympathy with Zoe Cremer's idea that the next phase of EA should be to embrace an 'institutional turn', both in our own institutions and in how we approach being effective in our other cause areas. However, as IIDM is probably the area I'm most interested in, take this with a large degree of bias and discount accordingly!! I would still suggest Forum readers check out the sources that Cremer highlights as promising, e.g. the work of Audrey Tang in Taiwan, or Helene Landemore's research agenda at Yale.
Meta-level question: It's interesting to me that this frame is easily accepted in terms of the first 'bednet' wave being replaced by the second 'AI/longtermism' wave. I agree that there has been a change here, but I think this change may have been reified somewhat. To what extent was early EA an Eden before the fall compared to EA now? Surely some of that change is honestly people changing their minds? Furthermore, a lot (most?) of EA funding still goes to GH&D; we're still all about bednets! (I know you talk about 'flagship' cause areas in this post, but I often see people push this point to its biggest extreme in discussion. Maybe I'm overreacting here.)
Jason @ 2023-06-18T18:32 (+2)
Conditional on this frame being roughly correct, it raises the question of whether to expect a fourth wave at some point, and what the strategic implications of predicting a wave roughly every seven years or so (albeit with low confidence in the frequency estimate) might be.
To address that, I propose a (literal) toy model of EA as a board game. I have Settlers of Catan in mind, but other games may work. The game has the following characteristics:
- Resource allocation is an important part of the game. Your resources are wood, brick, sheep, wheat, and ore (money, talent, new ideas, public support, organizational capacity?). You can invest in facilities that will produce a certain type of resource, but the lead time before those facilities start producing can be considerable. There are complex rules by which you can convert existing resources into others, but it's often not very efficient and usually takes a turn (year) or two.
- About every seven turns, the game enters a new phase in which the objectives and scoring rules of the game, as well as some game mechanics, significantly change. Some resources may become easier or harder to acquire, and some may become much more or less important to scoring well.
- The person running the game will tell you when a new phase has started... but may not tell you for a few turns more what the new objectives/scoring rules are. You can make educated guesses about what new objectives and scoring rules might now be in play, but cannot have high confidence in your guesses.
- The consequences of being short on a specific resource will depend on the specific rule changes. Resources may be complements (e.g., you need grain and ore to build a city), co-factors (e.g., you need a little sheep to build a port), or have other interactions.
- But in many cases the resource:impact relationship will not be linear. For instance, it's really painful to not have any wood or brick production in the standard rules of Catan; you get a lot of utility out of the first marginal units of production.
In this model, a rational player will significantly consider the universe of plausible new phases when making decisions about current resource stocks and production. In particular, the player will likely devote some attention to procuring a minimum level of supply for each resource that may be critical under a plausible rule change.
Coming back to the real world, the toy model suggests that once the contours of the third wave are better defined, it might be better not to focus EA "resource production" as heavily on that wave's particular needs as was perhaps done at certain points in the second wave. Rather, ensuring a basic supply of potentially critical resources for a fourth wave would also be an important goal.
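For readers who like to see this kind of thing concretely, here is a minimal simulation sketch of the toy model in Python. Everything in it is an assumption chosen purely for illustration: the phase length, the lead time, the scoring weights, the diminishing-returns exponent, and the two stylized strategies (a "specialist" who puts all effort into the currently highest-scoring resource, and a "hedger" who keeps a floor of every resource) are made up, and nothing is calibrated to real EA resource dynamics. It only shows how one might compare strategies when scoring rules change periodically and are revealed with a delay.

```python
# A minimal, purely illustrative simulation of the board-game toy model above.
# All numbers (phase length, lead time, weights, exponents) are made up.
import random

RESOURCES = ["money", "talent", "ideas", "public_support", "org_capacity"]
PHASE_LENGTH = 7   # a "wave" lasts roughly seven turns (years)
LEAD_TIME = 2      # facilities take two turns before they start producing
TURNS = 28         # simulate four phases


def simulate(strategy, seed):
    """Run one game; strategy(visible_weights) returns this turn's investment mix."""
    rng = random.Random(seed)
    production = {r: 0.0 for r in RESOURCES}   # current per-turn output
    pipeline = []                              # [turns_left, resource, amount]
    weights = {r: 1.0 for r in RESOURCES}      # scoring weights this phase
    visible_weights = dict(weights)
    score = 0.0

    for turn in range(TURNS):
        # Every PHASE_LENGTH turns, the scoring rules change.
        if turn % PHASE_LENGTH == 0 and turn > 0:
            weights = {r: rng.uniform(0.0, 2.0) for r in RESOURCES}
        # The new rules only become visible a couple of turns into the phase.
        if turn % PHASE_LENGTH >= 2:
            visible_weights = dict(weights)

        # Invest this turn's effort; it comes online after the lead time.
        for resource, share in strategy(visible_weights).items():
            pipeline.append([LEAD_TIME, resource, share])

        # Facilities that have finished their lead time start producing.
        for entry in pipeline:
            entry[0] -= 1
        for _, resource, amount in [e for e in pipeline if e[0] <= 0]:
            production[resource] += amount
        pipeline = [e for e in pipeline if e[0] > 0]

        # Diminishing returns: the first units of each resource matter most.
        score += sum(w * (production[r] ** 0.5) for r, w in weights.items())
    return score


def specialist(visible_weights):
    """Put everything into whichever resource currently appears to score highest."""
    best = max(visible_weights, key=visible_weights.get)
    return {best: 1.0}


def hedger(visible_weights):
    """Keep a floor of production in every resource, tilting toward the best."""
    best = max(visible_weights, key=visible_weights.get)
    alloc = {r: 0.1 for r in RESOURCES}   # minimum supply of everything
    alloc[best] += 1.0 - sum(alloc.values())
    return alloc


if __name__ == "__main__":
    runs = 500
    for name, strat in [("specialist", specialist), ("hedger", hedger)]:
        avg = sum(simulate(strat, seed) for seed in range(runs)) / runs
        print(f"{name:10s} average score over {runs} runs: {avg:.1f}")
```

Under these made-up parameters the comparison is only suggestive, but the structure mirrors the argument: delayed visibility of rule changes, lead times on production, and nonlinear returns are what make a minimum supply of every resource worth paying for.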