If EA Community-Building Could Be Net-Negative, What Follows?
By joshcmorrison @ 2023-01-02T19:21 (+153)
I think it’s likely that institutional effective altruism was a but-for cause of FTX’s existence[1] and therefore that it may have caused about $8B in economic damage due to FTX’s fraud (as well as potentially causing permanent damage to the reputation of effective altruism and longtermism as ideas). This example makes me feel it’s plausible that effective altruist community-building activities could be net-negative in impact,[2] and I wanted to explore some conjectures about what that plausibility would entail.
I recognize this is an emotionally charged issue, and to be clear my claim is not “EA community-building has been net-negative” but instead that that’s plausibly the case (i.e. something like >10% likely). I don’t have strong certainty that I’m right about that and I think a public case that disproved my plausibility claim would be quite valuable. I should also say that I have personally and professionally benefitted greatly from EA community building efforts (most saliently from efforts connected to the Center for Effective Altruism) and I sincerely appreciate and am indebted to that work.
Some claims that are related (and perhaps vaguely isomorphic) to the above, which I think are probably true but about which I may feel less strongly, are:
- To date, there has been a strong presumption among EAs that activities likely to significantly increase the number of people who explicitly identify as effective altruists (or otherwise increase their identification with the EA movement) are worth funding by default. That presumption should be weakened.
- Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.
- Leaders within social movements are likely to (consciously or unconsciously) overvalue measures that increase their own control and influence and undervalue measures that reduce it, which is a trap EA community-building efforts may have unintentionally fallen into.
- Pre-FTX, there was a reasonable assumption that expanding the EA movement was one of the most effective things a person could do, and the FTX catastrophe should significantly update our attitude towards that assumption.
- FTX should significantly update us on principles and strategies for EA community/movement-building and institutional structure, and there should be more public discourse on what such updates might be.
- EA is obligated to undertake institutional reforms to minimize the risk of creating an FTX-like problem in the future.
Here are some conjectures I’d make for potential implications of believing my plausibility claim:
- Make Impact Targets Public: Insofar as new evidence has emerged about the impact of EA community building (and/or insofar as incentives towards movement-building may map imperfectly onto real-world impact), it is more important to make public, numerical estimates of the goals of particular community-building grants/projects going forward and to attempt public estimation of actual impact (and connection to real-world ends) of at least some specific grants/projects conducted to date. Outside of GiveWell, I think this is something EA institutions (my own included) should be better about in general, but I think the case is particularly strong in the community-building context given the above.
- Separate Accounting for Community Building vs. Front-Line Spending: I have argued in the past that meta-level and object-level spending by EAs should be in some sense accounted for separately. I admit this idea is, at the moment, under-specified, but one basic example would be that EAs/EA grant makers report their "front-line" and "meta" (or "community building") donation amounts as separate numbers (e.g. "I gave X to charity this year in total, of which Y was to EA front-line stuff, Z to EA community stuff, and W to non-EA stuff"). I think there may be intelligent principles to develop about how the amounts of EA front-line funding and meta-level funding should relate to one another, but I have less of a sense of what those principles might be than a belief that starting to account for them as separate types of activities in separate categories will be productive.
- Integrate Future Community Building More Closely with Front-Line Work: Insofar as it makes sense to have less of a default presumption towards the value of community building, a way of de-risking community-building activities is to link them more closely to activities where the case for direct impact is stronger. For example, I personally hope for some of my kidney donation, challenge trial recruitment, and Rikers Debate Project work to have significant EA community-building upshots, even though that meta level is not those projects' main goal or the metric I use to evaluate them. For what it's worth, I think pursuing "double effect" strategies (e.g. projects that simultaneously have near-termist and longtermist targets, or animal welfare and forecasting-capacity targets) is underrated in current EA thinking. I also think connecting EA recruitment to direct work may mitigate certain risks of community building (e.g. the risks of creating an EA apparatchik class, recruiting "EAs" not sufficiently invested in having an actual impact, or competing with direct work for talent).
- Implement Carla Zoe Cremer’s Recommendations: Maybe I’m biased because we’re quoted together in some of the same articles but I’ve honestly been pretty surprised there has not been more public EA discussion post-FTX of adopting a number of Cremer's proposed institutional reforms, many of which seem to me obviously worth doing (e.g. whistleblowing protections). Some (such as democratizing funding decisions) are more complicated to implement, and I acknowledge the concern that these procedural measures create friction that could reduce the efficacy of EA organizations, but I think (a) minimizing unnecessary burden is a design challenge likely to yield fairly successful solutions and (b) FTX clearly strengthens the arguments in favor of bearing the cost of that friction. Also, insofar as she'd be willing (and some form of significant compensation is clearly merited), integrally engaging Cremer in whatever post-FTX EA institutional reform process emerges would be both directly helpful and a public show of good faith efforts at rectification.
- Consideration of a “Pulse” Approach to Funding EA Community Building: It may be the case that large EA funders should do time-limited pulses of funding towards EA community-building goals or projects, with the intention of building institutions that can sustain themselves on separate funds in the future. The logic of this is: (a) insofar as EAs may be bad judges of the value of our own community building, requiring something appealing to external funders helps check that bias; (b) creating EA community institutions that must be attractive to outsiders to survive may avoid certain epistemic and political risks inherent in being too insular.
- EA as a Method and not a Result: The concept of effective altruism (rationally attempting to do good) has broad consensus but particular conceptions may be parochial or clash with one another.[3] A “thinner” effective altruism that emphasizes EA as an idea akin to the scientific method rather than a totalizing identity or community may be less vulnerable to FTX-like mistakes.
- Develop Better Logic for Weighing Harms Caused by EA against EA Benefits: An EA logic that assumes resources available to EAs will be spent at (say) GiveWell benefit levels (which I take to be roughly $100/DALY or equivalent) but that resources available to others are spent at (say) US government valuations of a statistical life (I think roughly $100,000/DALY) seems to justify significant risks of incurring very sizable harms to the public if they are expected to yield additional resources for EA; a rough illustration of this asymmetry is sketched just after this list. Clearly, EA's obligations to avoid direct harms (or certain types of direct harms) are at least somewhat asymmetric to its obligations/permissions to generate benefits. But at the same time, essentially any causal act will have some possibility of generating harm (which in the case of systemic-change efforts can be quite significant), so a precautionary principle designed in an overly simplistic way would kneecap the ability of EAs to make the world better. I don't know the right answer to this challenge, but clearly "defer to common sense morality" has proven insufficient, and I think more intellectual work should be done.
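To make the asymmetry concrete, here is a minimal sketch (in Python) of how this kind of accounting can score a very large public harm as net positive. The $100/DALY and $100,000/DALY figures are the rough ones cited above; the fundraising and harm amounts are purely hypothetical.

```python
# Rough valuations cited above (illustrative, not authoritative):
EA_COST_PER_DALY = 100          # ~$100 per DALY at GiveWell-level spending
PUBLIC_COST_PER_DALY = 100_000  # ~$100,000 per DALY, statistical-life-style valuation

def net_dalys(ea_resources_gained: float, public_harm: float) -> float:
    """Net DALYs under the (flawed) accounting described above: dollars in EA
    hands are scored at EA_COST_PER_DALY, harms to the public at
    PUBLIC_COST_PER_DALY."""
    benefit = ea_resources_gained / EA_COST_PER_DALY
    harm = public_harm / PUBLIC_COST_PER_DALY
    return benefit - harm

# Hypothetical example: a strategy expected to destroy $8B of public value
# while raising $10M for EA-style spending.
print(net_dalys(ea_resources_gained=10_000_000, public_harm=8_000_000_000))
# -> 20000.0: the accounting treats a hugely harmful strategy as a ~20,000-DALY
#    net "benefit", which is exactly the failure mode described above.
```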
I'm not at all certain about the conjectures/claims above, but I think it's important that EA deals with the intellectual implications of the FTX crisis, so I hope they can provoke a useful discussion.
- ^
I am basing this on reporting in Semafor and the New Yorker. To be clear, I'm not saying that once you assume Alameda/FTX's existence, the ideology of effective altruism necessarily made it more likely that those entities would commit fraud. But I do think it is unlikely they would have existed in the first place without the support of institutional EA.
- ^
To be clear, my claim is not "the impact of the FTX fraud incident plausibly outweighs benefits of EA community building efforts to date" (though that may be true and would be useful to publicly disprove if possible) but that the FTX fraud should demonstrate there are a range of harms we may have missed (which collectively could plausibly outweigh benefits) and that "investing in EA community building is self-evidently good" is a claim that needs to be reexamined.
- ^
I find the distinction between concept and conception to be helpful here. Effective altruism as a concept is broadly unobjectionable, but particular conceptions of what effective altruism means or ought to entail involve thicker descriptions that can be subject to error or clash with one another. For example, is extending present-day human lifespans default good because human existence is generally valuable, or bad because doing so tends to create greater animal suffering that outweighs the human satisfaction in the aggregate? I think people who consider the principles of effective altruism important to their thinking can reasonably come down on both sides of that question (though I, and I imagine the vast majority of EAs, believe the former). Moreover, efforts to build a singular EA community around specific conceptions of effective altruism will almost certainly exclude other conceptions, and the friction of doing so may create political dynamics (and power-seeking behavior) that can lead to recklessness or other problems.
CarlaZoeC @ 2023-01-04T15:32 (+119)
ok, an incomplete and quick response to the comments below (sry for typos). thanks to the kind person who alerted me to this discussion going on (still don't spend my time on your forum, so please do just pm me if you think I should respond to something)
1.
- regarding blaming Will or benefitting from the media attention
- I don't think Will is at fault alone (that would be ridiculous), but I do think it would have been easy for him to make sure something was done, if only because he can delegate more easily than others (see below)
- my tweets are a reaction to his tweets where he says he believes he was wrong to deprioritise measures
- given that he only said this after FTX collapsed, I'm saying it's annoying that this had to happen before people think that institutional incentive-setting needs to be further prioritised
- journalists keep wanting me to say this and I have had several interviews in which I argue against this simplifying position
2.
- I'm rather sick of hearing from EAs that I'm arguing in bad faith
- if I wanted to play nasty it wouldn't be hard (for anyone) to find attack lines, e.g. I have not spoken about my experience of sexual misconduct in EA and I continue to refuse to name names in respect to specific actions I criticise or continue to get passed information about, because I want to make sure the debate is not about individuals but about incentives/structures
- a note on me exploiting the moment of FTX to get media attention
- really?
- please join me in speaking with the public or with journalists, you'll see it's no fun at all doing it. I have a lot of things I'd rather be doing. many people will be able to confirm that I've tried to convince them to speak out too but I failed, likely because
- it's pretty risky because you end up having rather little control over how your quotes will be used, so you just hope to work with someone who cares, but every journalist has a pre-conception of course. it's also pretty time-consuming with very little impact, and then you have to deal with forum debates like this one. but hey, if anyone wants to join me, I encourage anyone who wants to speak to the press to message me and I'll put you in touch.
- the reason I do it is that I think EA will 'work', just not in the way that many good people in it intend it to work
3.
- I indeed agree that these measures are not 'proven' to be good because of FTX
- I think they were a good idea before FTX and they continue to be good ideas
- they are not 'my' ideas, they are absolutely standard measures against big bureaucracy misconduct
- I don't want anyone to 'implement my recommendations' just because they're apparently mine (they are not); they are a far bigger project than a single person should handle, and my hope was that the EA community would be full of people who'd maybe take it as inspiration and do something with it in their local context - it would then be their implementation.
- I like the responses I had on twitter that were saying that FTX was in fact the first to do re-granting
- I agree and I thought that was great!
- in fact they were interested in funding a bunch of projects I care a lot about, including a whole section on 'epistemics'! I'm not sure it was done for the right reasons (maybe the incentive to spend money fast was also at play), and the re-granting was done without any academic rigor, data collection or metrics about how well it works (as far as I know), but I was still happy to see it
- I don't see how this invalidates the claim that re-granting is a good idea though
4.
- those who only want to know if my recommendations would have prevented this specific debacle are missing the point. someone may have blown the whistle, some transparency may have helped raise alarms, fewer people may have accepted the money, distributed funding may have meant more risk-averse people would have had a say about whether to accept the money - or not. risk reduction is about reduction, not bringing it down to 0. so, do those measures, depending on how they're set up, reduce risk? yes, I can see how they would, e.g. is it true that there were Slack messages on some Slack for leaders which warned against SBF, or is it true that several organisations decided (but don't disclose why) against taking FTX funding (https://www.newyorker.com/news/annals-of-inquiry/sam-bankman-fried-effective-altruism-and-the-question-of-complicity)? I don't know enough about the people involved to say what each would have needed to be incentivised to be more public about their concerns. but do you not think it would have been useful knowledge to have available, e.g. for those EA members who got individual grants and made plans with those grants?
even if institutional measures would not have prevented the FTX case, they are likely to catch a whole host of other risks in the future.
5.
- The big mistake that I am making is not to be an EA but to comment on EA. It makes me vulnerable to the attack of "your propositions are not concrete enough to fix our problems, so you must be doing it to get attention?" I am not here trying to fix your problems.
- I actually do think that outsiders are permitted to ask you to fix problems because your stated ambition is to do risk analysis for all of us, not just for effective altruism, but for, depending on what kind of EA you are, a whole category of sentient beings, including categories as large as 'humanity' or 'future beings'. That means that even if I don't want to wear your brand, I can demand that you answer the questions of who gets to be in the positions to influence funding and why? And if it's not transparent, why is it not transparent? Is there a good reason for why it is not transparent? If I am your moral patient, you should tell me why your current organizational structures are more solid, more epistemically trustworthy than alternative ones.
6.
- I don't say anywhere that 'every procedure ought to be fully democratised' or 'every organisation has to have its own whistleblower protection scheme' - do I?
- *clearly* these are broad arguments, geared towards starting a discussion across EA and within EA institutions, that need to be translated into concrete proposals and adjustments and assessments that meet each contextual need
- there's no need to dismiss the question of what procedures actually lead to the best epistemic outcomes by arguing that 'democratising everything' would bring bureaucracy (of course it would and no one is arguing for that anyway)
- for all the analyses of my tweets, please also look at the top page of the list of recommendations for reforms; it says something like "clearly this needs to be more detailed to be relevant but I'll only put in my free time if I have reason to believe it will be worth my time". There was no interest by Will and his team to follow up with any of it, so I left it at that (I had sent another email after the meeting with some more concrete steps necessary to at least get data, do some prototyping and research to test some of my claims about decentralised funding, and in which I offered that I could provide advice and help out but that they should employ someone else to actually lead the project). Will said he was busy and would forward it to his team. I said 'please reach out if you have any more questions' and never heard from anyone again. It won't be hard to come up with concrete experiments/ideas for a specific context/organisation/task/team but I'm not sure why it would be productive for me to do that publicly rather than at the request of a specific organisation/team. If you're an EA who cares about EA having those measures in place, please come up with those implementation details for your community yourself.
7.
- I'd be very happy to discuss details of actually implementing some of these proposals for particular contexts in which I believe it makes sense to try them. I'd be very happy to consult organizations that are trying to make steps in those directions. I'd be very happy to engage with and see a theoretical discussion about the actual state of the research.
But none of the discussions that I've seen so far are actually at the level of detail that would match the forefront of the experimental data and scholarly work. Do you think scholars of democratic theory have not yet thought about a response to the typical 'but most people are stupid'? Everyone who dismisses decentralised reasoning as a viable and epistemically valuable approach should at least engage with the arguments by political scientists (I've cited a bunch in previous publications/twitter; here again, e.g. Landemore, Hong & Page are a good start) who have spent years on these questions (i.e. not me) and then argue on their level to bring the debate forward, if they then still think they can.
8.
Jan, you seem particularly unhappy with me, reach out if you like, I'm happy to have a chat or answer some more questions.
Devin Kalish @ 2023-01-04T20:15 (+36)
For what it’s worth, I think that you are a well-liked and respected critic not just outside of EA, but also within it. You have three posts and 28 comments but a total karma of 1203! Compare this to Emile Torres or Eric Hoel or basically any other external critic with a forum account. I’m not saying this to deny that you have been treated unfairly by EAs, I remember one memorable event when someone else was accused by a prominent EA of being your sock-puppet on basically no evidence. This just to say, I hope you don’t get too discouraged by this, overall I think there’s good reason to believe that you are having some impact, slowly but persistently, and many of us would welcome you continuing to push, even if we have various specific disagreements with you (as I do). This comment reads to me as very exhausted, and I understand if you feel you don’t have the energy to keep it up, but I also don’t think it’s a wasted effort.
CarlaZoeC @ 2023-01-04T21:05 (+11)
Thank you for taking the time to write this up, it is encouraging - I also had never thought to check my karma ...
Lukas_Gloor @ 2023-01-04T16:57 (+10)
It would be a bit rude to focus on a minor part of your comment after you posted such a comprehensive reply, so I first want to say that I agreed with some of the points.
With that out of the way, I even more want to say that the following perspective strikes me as immoral, in that it creates terrible, unfair incentives:
- I actually do think that outsiders are permitted to ask you to fix problems because your stated ambition is to do risk analysis for all of us, not just for effective altruism, but for, depending on what kind of EA you are, a whole category of sentient beings, including categories as large as 'humanity' or 'future beings'. That means that even if I don't want to wear your brand, I can demand that you answer the questions of who gets to be in the positions to influence funding and why? And if it's not transparent, why is it not transparent? Is there a good reason for why it is not transparent? If I am your moral patient, you should tell me why your current organizational structures are more solid, more epistemically trustworthy than alternative ones.
The problem I have with this framing is that it "punishes" EA (by applying isolated demands of "justify yourselves") for its ambitious attempts to improve the world, while other groups of people (or other ideologies) (presumably?) don't have to justify their inaction. And these demands come at a time when EA doesn't even have that much power/influence. (If EA were about to work out the constitution of a world government about to be installed, then yeah, it would very much be warranted – both for EA outsiders and insiders – to see "let's scrutinize EA and EAs" as a main priority!)
The longtermist EA worldview explicitly says that the world is broken and on a bad trajectory, so that we're pretty doomed if nothing changes soon. (Or, at least it says that we're running an unjustified level of unnecessary risks if we don't change anything soon – not all EAs are of the view that existential risks are >10%.)
If this worldview is correct, then what you're demanding is a bit like going up to Frodo and his companions and stalling them to ask a long list of questions about their decision procedures and how much they've really thought through what they're going to do once they have achieved more of their aims, and if they'd not rather consult with the populations of the Shire or other lands in Middle-earth to decide what they should do. All of that at a time when the fellowship is still mostly just in Rivendell planning for future moves (and while Sauron is getting ready to kill lots of humans, elves, and dwarves).
Of course, the communists also claimed that the world is broken when they tried to convince more people to join them and seek influence to make the world better. So, I concede that it's not a priori unreasonable to consider it urgent and morally pressing, when you come across a group of self-proclaimed world-improvers on an upwards trajectory (in terms of real-world influence), to scrutinize if they have their head together and have the integrity needed to in expectation change things for the better rather than for the worse.
The world is complicated; it matters to get things right. Sometimes self-proclaimed world improvers are the biggest danger, but sometimes the biggest danger is the reason why self-proclaimed world improvers are hurrying around doing stuff and appear kind of desperate. You can be wrong in both directions:
- slow down* Frodo at a point where it's (a) anyway unlikely that he'll succeed and (b) totally a dumb use of time to focus on things that only ever become a priority if Middle-earth survives Sauron, given the imminent threat of Sauron
- fail to apply scrutiny to the early Marxists despite (a) there were already signs of them becoming uncannily memetically successful with a lot of resentfulness in the societal undercurrent (which is hard to control) and (b) the "big threat" was 'just' Capitalism and not Sauron. (One thing to say about Capitalism is "it works," and it seems potentially very risky to mess with systems that work.)
*Not all types of criticism / suggestions for improvement are an instance of "slowing down." What I'm criticizing here is the attitude of "you owe us answers" rather than "here's some criticism, would be curious for replies, especially if more people agree with my criticism (in which case the voices calling for replies will automatically grow/become louder)."
Journalists will often jump towards the perspective** that's negative and dismissive of EA concerns because that fits into existing narratives and because journalists haven't thought about the EA worldview in detail (and their primary reaction to things like AI risk is often driven by absurdity heuristics rather than careful engagement). You, by contrast, have thought through these things. So, I'd say it's on you to make an attempt to at least present EA in a fair light – though I of course understand that, as a critic of EA, it's reasonable that this isn't your main priority. (And maybe you've tried this – I understand it's hard to get points across with some journalists.)
**One unfortunate thing about some of the reporting on EA is also that journalists sometimes equate EA with "Silicon Valley tech culture," even though the latter is arguably what EAs are to some degree in tension with (AI capabilities research and "tech progress too fast before wisdom/foresight can catch up.") That makes EA seem powerful so you can punch upwards at it, when in fact EA is still comparatively small. (And smaller now after recent events.)
CarlaZoeC @ 2023-01-04T17:59 (+25)
Indeed Lukas, I guess what I'm saying is: given what I know about EA, I would not entrust it with the ring.
Chris Leong @ 2023-01-05T03:17 (+7)
I can understand why you mightn't trust us, but I would encourage EAs to consider that we need to back ourselves, even though I've certainly been shaken by the whole FTX fiasco. Unfortunately, there's an adverse selection effect where the least trustworthy actors are unlikely to recuse themselves in terms of influence, so if the more trustworthy actors recuse themselves, we will end up with the least responsible actors in control.
So despite the flaws I see with EA, I don't really see any choice apart from striving as hard as we can to play our part in building a stronger future. After all, the perfect is the enemy of the good. And if the situation changes such that there are others better equipped than us to handle these issues and who would not benefit from our assistance, we should of course recuse ourselves, but sadly I believe this is unlikely to happen.
Jason @ 2023-01-05T17:34 (+16)
I think the global argument is that power in EA should be deconcentrated/diffused across the board, and subjected to more oversight across the board, to reduce risk from its potential misuse. I don't think Zoe is suggesting that any actor should get a choice on how much power to lose or oversight to have. Could you say more about how adverse selection interacts with that approach?
Sharmake @ 2023-01-08T00:44 (+4)
Indeed Lukas, I guess what I'm saying is: given what I know about EA, I would not entrust it with the ring
I don't understand what this means, exactly.
If you're talking about the literal one ring from LOTR, then yeah EA not being trustworthy is vacuously true, since no human without mental immunity feats can avoid being corrupted.
bruce @ 2023-01-04T18:06 (+9)
With that out of the way, I even more want to say that the following perspective strikes me as immoral, in that it creates terrible, unfair incentives:
"- I actually do think that outsiders are permitted to ask you to fix problems because your stated ambition is to do risk analysis for all of us, not just for effective altruism, but for, depending on what kind of EA you are, a whole category of sentient beings, including catagories as large as 'humanity' or 'future beings'. That means that even if I don't want to wear your brand, I can demand that you answer the questions of who gets to be in the positions to influence funding and why? And if it's not transparent, why is it not transparent? Is there a good reason for why it is not transparent? If I am your moral patient, you should tell me why your current organizational structures are more solid, more epistemically trustworthy than an alterntive ones."
The problem I have with this framing is that it "punishes" EA (by applying isolated demands of "justify yourselves") for its ambitious attempts to improve the world, while other groups of people (or other ideologies) (presumably?) don't have to justify their inaction. And these demands come at a time when EA doesn't even have that much power/influence. (If EA were about to work out the constitution of a world government about to be installed, then yeah, it would very much be warranted – both for EA outsiders and insiders – to see "let's scrutinize EA and EAs" as a main priority!)
Immoral? This is a surprising descriptor to see used here.
The standard of "justify yourselves" applied to a community soup kitchen, or some other group/ideology, is very different to the standard of "justify yourselves" applied to a movement apparently dedicated to doing the most good it can for those who need it most / all humans / all sentient beings / all sentience that may exist in the far future. The decision-relevant point shouldn't be "well, does [some other group] justify themselves and have transparency and have good institutions and have epistemically trustworthy systems? If not, asking EA to reach it is an isolated demand for rigour, and creates terrible incentives." Like - what follows? Are you suggesting we should then ignore this because other groups don't do this? Or because critics of EA don't symmetrically apply these criticisms to all groups around the world?
The questions (imo) should be something like - are these actions beneficial in helping EA be more impactful?[1] Are there other ways of achieving the same goals better than what's proposed? Are any of these options worth the costs? I don't see why other groups' inaction justifies EA's, if it's the case that these actions are in fact beneficial.
And these demands come at a time when EA doesn't even have that much power/influence. (If EA were about to work out the constitution of a world government about to be installed, then yeah, it would very much be warranted – both for EA outsiders and insiders – to see "let's scrutinize EA and EAs" as a main priority!)
If EA wants to be in a position to work out the constitution of a world government about to be installed, it needs to first show outsiders that it's more than a place of interesting intellectual ideas, but a place that can be trusted to come up with interventions and solutions that will actually work in practice. If the standard for "scrutinising EA" is when EA is about to work out the constitution of a world government about to be installed, it is probably already too late.
What I'm criticizing here is the attitude of "you owe us answers" rather than "here's some criticism, would be curious for replies, especially if more people agree with my criticism (in which case the voices calling for replies will automatically grow/become louder)."
I don't want to engage in a discussion about the pros and cons of the Democratising Risk paper, but from an outsider's perspective it seems pretty clear to me that Carla did engage in a good faith "EA-insider" way, even if you don't think she's expressing criticism in a way you like now. But again - if you think EA is actually analogous to Frodo and responsible for saving the world, of course it would be reasonable for outsiders to take strong interest in what your plan is, and where it might go wrong, or be concerned about any unilateral actions you might take - they are directly impacted by what you choose to do with the ring, they might be in a position to greatly help or hinder you. For example, they might want someone more capable to deliver the ring, and not just the person who happened to inherit it from his cousin.
More generally, EA should remain open to criticism that isn't delivered at your communication norms, and risks leaving value on the table if it ignores criticism solely because it isn't expressed in an attitude that you prefer.
- ^
e.g. via more trust within the community at those who are steering it, more trust from external donors, more trust from stakeholders who are affected by EA's goals, or some other way?
Lukas_Gloor @ 2023-01-04T19:52 (+44)
Immoral? This is a really surprising descriptor to see used here.
Yeah. I have strong feelings that social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk. For example, I'm annoyed when people punish others for honesty in cases where it would have been easy to tell a lie and look better. Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely going to apply extra scrutiny towards you." Meanwhile, AI capabilities companies continue to scale up compute and most of the world is busy discussing soccer or what not. Yet somehow, "Are EAs following democratic processes and why does their funding come from very few sources?" is made into the bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.
The question shouldn't be "well, does [some other group] justify themselves and have transparency and have good institutions and have epistemically trustworthy systems? If not, asking EA to reach it is an isolated demand for rigour, and creates terrible incentives." Like - what follows?
EAs who are serious about their stated goals have the most incentive of anyone to help the EA movement get its act together. The idea that "it's important to have good institutions" is something EA owes to outsiders is what seems weird to me. Doesn't this framing kind of suggest that EAs couldn't motivate themselves to try their best if it weren't for "institutional safeguards"? What a depressing view of humans, that they can only act according to their stated ideals if they're watched at every step and have to justify themselves to critics!
EAs have discussions about governance issues EA-internally, too. It's possible (in theory) that EA has as many blindspots as Zoe thinks, but it's also possible that Zoe is wrong (or maybe it's something in between). Either way, I don't think anyone in EA, nor "EA" as a movement, has any obligation to engage in great detail with Zoe's criticisms if they don't think that's useful.* (Not to say that they don't consider the criticism useful – my impression is that there are EAs on both sides, and that's fine!)
If a lot of people agree with Zoe's criticism, that creates more social pressure to answer to her points. That's probably a decent mechanism to determine what an "appropriate" level of minimally-mandatory engagement should be – though it depends a bit on whether the social pressure comes from well-intentioned people who are reasonably informed about the issues or whether some kind of "let's all pile on these stupid EAs" dynamics emerge. (So far, the dynamics seem healthy to me, but if EA keeps getting trashed in the media, then this could change.)
*(I guess if someone's impression of EA was "group of people who want to turn all available resources into happiness simulations regardless of what existing people want for their future," then it would be reasonable for them to go like, "wtf, if that's your movement's plan, I'm concerned!" However, that would be a strawman impression of EA. Most EAs endorse moral views according to which individual preferences matter and "eudaimonia" is basically "everyone gets what they most want." Besides, even the few hedonist utilitarians [or negative utilitarians] within EA think preferences matter and argue for being nice to others with different views.)
The questions should just be - are these actions beneficial in helping EA be more impactful? [1] Are there other ways of achieving the same goals better than what's proposed? Are any of these options worth the costs? I don't see why other groups' inaction justifies EA's, if it's the case that these actions are in fact beneficial.
I don't disagree with this part. I definitely think it's wise for EAs to engage with critics, especially thoughtful critics, which I consider Zoe to be one of the best examples of, despite disagreeing with probably at least 50% of her specific suggestions.
I don't want to engage in a discussion about the pros and cons of the Democratising Risk paper, but from an outsider's perspective it seems pretty clear to me that Carla did engage in a good faith "EA-insider" way, even if you don't think she's expressing criticism in a way you like now.
While I did use the word "immoral," I was only commenting on the framing Zoe/Carla used in that one particular paragraph I quoted. I definitely wasn't describing her overall behavior!
In case you want my opinion, I am a bit concerned that her rhetoric is often a bit "sensationalist" in a nuance-lacking way, and this makes EA look bad to journalists in a way I consider uncalled for. But I wouldn't label that "acting in bad faith"; far from it!
But again - if you think EA is actually analogous to Frodo and responsible for saving the world, of course it would be reasonable for outsiders to take interest in what your plan is, and where it might go wrong - they are directly impacted by what you choose to do with the ring, they might be in a position to greatly help or hinder you. For example, they might want someone more capable to deliver the ring, and not just the person who happened to inherit it from his cousin.
Yeah, I agree with all of that. Still, in the end, it's up to EAs themselves to decide which criticisms to engage with at length and where it maybe isn't so productive.
For example, they might want someone more capable to deliver the ring, and not just the person who happened to inherit it from his cousin.
In the books (or the movies), this part is made easy by having a kind and wise old wizard – who wouldn't consider going with Gandalf's advice a defensible decision-procedure!
In reality, "who gets to wield power" is more complicated. But one important point in my original comment was that EA doesn't even have that much power, and no ring (nor anything analogous to it – that's a place where the analogy breaks). So, it's a bit weird to subject EA to as much scrutiny as would be warranted if they were about to enshrine their views into the constitution of a world government. All longtermist EA is really trying to do right now is ensure that people won't be dead soon, so that there'll be the option to talk governance and so on later on. (BTW, I do expect EAs to write up proposals for visions of AI-aided ideal governance at some point. I think that's good to have and good to discuss. I don't see it as the main priority right now because EAs haven't yet made any massive bids for power in the world. Besides, it's not like whatever the default would otherwise be has much justification. And you could even argue that EAs have done the most so far out of any group promoting discourse about important issues related to fair governance of the future.)
bruce @ 2023-01-04T22:58 (+8)
Thanks for sharing! We have some differing views on this which I will focus on - but I agree with much of what you say and do appreciate your thoughts + engagement here.
Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely going to apply extra scrutiny towards you." Meanwhile, AI capabilities companies continue to scale up compute and most of the world is busy discussing soccer or what not. Yet somehow, "Are EAs following democratic processes and why does their funding come from very few sources?" is made into the bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.
It sounds like you are getting the impression that people criticising EA must think this is a larger issue than AI capabilities or widespread apathy etc., since they aren't spending their time lobbying against those larger issues. But there might be other explanations for their focus - any given individual's sphere of influence, tractability, personal identity, and other considerations can all be factors that contribute here.
EAs who are serious about their stated goals have the most incentive of anyone to help the EA movement get its act together. The idea that "it's important to have good institutions" is something EA owes to outsiders is what seems weird to me. Doesn't this framing kind of suggest that EAs couldn't motivate themselves to try their best if it weren't for "institutional safeguards"?
"It's important to have good institutions" is clearly something that "serious EAs" are strongly incentivised to act on. But people who have a lot of power and influence and funding also face incentives to maintain a status quo that they benefit from. EA is no different, and people seeking to do good are not exempt from these kinds of incentives. And EAs who are serious about things should acknowledge that they are subject to these incentives, as well as the possibility that one reason outsiders might be speaking up about this is that they think EAs aren't taking the problem seriously enough. The benefit of the outside critic is NOT that EAs have some special obligation towards them (though, in this case, if your actions directly impact them, then they are a relevant stakeholder that is worth considering), but that they are somewhat removed and may be able to provide some insight into an issue that is harder for you to see when you are deeply surrounded by other EAs and people who are directly mission / value-aligned.
What a depressing view of humans, that they can only act according to their stated ideals if they're watched at every step and have to justify themselves to critics!
I think this goes too far, I don't think this is the claim being made. The standard is just "would better systems and institutional safeguards better align EA's stated ideals and what happens in practice? If so, what would this look like, and how would EA organisations implement these?". My guess is you probably agree with this though?
Either way, I don't think anyone in EA, nor "EA" as a movement, has any obligation to engage in great detail
I guess if someone's impression of EA was "group of people who want to turn all available resources into happiness simulations regardless of what existing people want for their future"
Nitpick: while I agree that it would be a strawman, it isn't the only scenario for outsiders to be concerned. There are also people who disagree with some longtermists vision of the future, there are people who think EA's general approach is bad, and it could follow that those people will think $$ on EA causes are poorly spent and should be spent in [some different way]. There are also people who think EA is a talent drain away from important issues. Of course, this doesn't interact with the extent to which EA is "obligated" to respond, especially because many of these takes aren't great. I agree that there's no obligation, per se. But the claim is "outsiders are permitted to ASK you to fix your problems", not that you are obligated to respond (though subsequent sentences RE: "I can demand" or "you should" might be a source of miscommunication).
I guess the way I see it is something like - EA isn't obligated to respond to any outsider criticism, but if you want to be taken seriously by these outsiders who have these concerns, if you want buy-in from people who you claim to be working with and working for, if you don't want people at social entrepreneurship symposiums seriously considering questions like "Is the way to do the most good to destroy effective altruism?", then it could be in your best interest to take good-faith criticisms and concerns seriously, even if the attitude comes across poorly, because it likely reflects some barrier to your achieving your goals. But I think there probably isn't much disagreement between us here.
Cullen_OKeefe @ 2023-01-04T20:47 (+4)
Yeah. I have strong feelings that social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk. For example, I'm annoyed when people punish others for honesty in cases where it would have been easy to tell a lie and look better. Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely going to apply extra scrutiny towards you." Meanwhile, AI capabilities companies continue to scale up compute and most of the world is busy discussing soccer or what not. Yet somehow, "Are EAs following democratic processes and why does their funding come from very few sources?" is made into the bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.
I think this is an undervalued idea. But I also think that there's a distinct but closely related idea, which is valuable, which is that for any Group X with Goal Y, it is nearly always instrumentally valuable for Group X to hear about suggestions about how it can better advance Goal Y, especially from those who believe that Goal Y is valuable. Sometimes this will read as (or have the effect of) disincentivizing adopting Goal Y (because it leads to criticism), but in fact it's often much easier to marginally improve the odds of Goal Y being achieved by attempting to persuade Group X to do better at Y than to persuade Group ~X who believes ~Y. I take Carla Zoe to be doing this good sort of criticism, or at least that's the most valuable way to read her work.
Cullen_OKeefe @ 2023-01-04T21:20 (+4)
I would also point out that I think the proposition that "social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk" is both:
- Probably undesirable to implement in practice because any criticism will have some disincentivizing effect.
- Probably violated by your comment itself, since I'd guess that any normal person would be disincentivized to some extent from engaging in constructive criticism (above the baseline of apathy or jerkiness) if it is likely to be labeled as immoral.
This is just to say that I value the general maxim you're trying to advance here, but "never" is way too strong. Then it's just a boring balancing question.
Lukas_Gloor @ 2023-01-04T21:49 (+6)
"Never" is too strong, okay. But I disagree with your second point. I feel like I was only speaking out against the framing that critics of EA are entitled to a lengthy reply because of EA being ambitious in its scope of caring. (This framing was explicit at least in the quoted paragraph, not necessarily in her post as a whole or her previous work.) I don't feel like I was discouraging criticism. Basically, my point wasn't about the act of criticizing at all, it was only about an added expectation that went with it, which I'd paraphrase as "EAs are doing something wrong unless they answer to my concerns point by point."
Cullen_OKeefe @ 2023-01-04T22:00 (+4)
I feel like I was only speaking out against the framing that critics of EA are entitled to a lengthy reply because of EA being ambitious in its scope of caring. (This framing was explicit at least in the quoted paragraph, not necessarily in her post as a whole or her previous work.)
Ah, okay. That seems more reasonable. Sorry for misunderstanding.
Jason @ 2023-01-04T19:56 (+4)
I agree insofar as status as an intended EA beneficiary does not presumptively provide someone with standing to demand answers from EA about risk management. However, non-EA persons are also potentially subject to the risk of harms generated by EA, and that status gives them at least some degree of standing.
I think the LOTR analogy is inapt. Taking Zoe's comment here at face value, she is not suggesting that everyone put Project Mount Doom on hold until the Council of Elrond runs some public-opinion surveys. She is suggesting that reform ideas warrant further development and discussion. That's closer to asking for some time of a mid-level bureaucrat at Rivendell and a package of lembas than diverting Frodo. Yes, it may be necessary to bring Frodo in at some point, but only if preliminary work suggests it would be worthwhile to do so.
I recognize that there could be some scenarios in which the utmost single-mindedness is essential: the Nazgûl have been sighted near the Ringbearer. But other EA decisions don't suggest that funders and leaders are at Alert Condition Nazgûl. For example, while I don't have a clear opinion on the Wytham purchase, it seems to have required a short-term expenditure of time and lock-up of funds for an expected medium-to-long-run payoff.
Lukas_Gloor @ 2023-01-04T20:48 (+4)
However, non-EA persons are also potentially subject to the risk of harms generated by EA, and that status gives them at least some degree of standing.
Yeah, I agree that if we have reason to assume that there might be significant expected harms caused by EA, then EAs owe us answers. But I think it's a leap of logic to go from "because your stated ambition is to do risk analysis for all of us" to "That means that even if I don't want to wear your brand, I can demand that you answer the questions of [...]" – even if we add the hidden premise "this is about expected harms caused by EA." Just because EA does "risk analysis for all sentient beings" doesn't mean that EA puts sentient beings at risk. Having suboptimal institutions is bad, but I think it's far-fetched to say that it would put non-EAs at risk. At least, it would take more to spell out the argument (and might depend on specifics – perhaps the point goes through in very specific instances, but not so much if e.g., an EA org buys a fancy house).
There are some potentially dangerous memes in the EA memesphere around optimizing for the greater good (discussed here, recently), which is the main concern I actually see and share. But if that was the only concern, it should be highlighted as such (and it would be confusing why many arguments then seem to be about seemingly unrelated things). (I think risks from act consequentialism was one point out of many in the Democratizing risk paper – I remember I criticized the paper for not mentioning any of the ways EAs themselves have engaged with this concern.)
By contrast, if the criticism of EA is more about "you fail at your aims" rather than "you pose a risk to all of us," then my initial point still applies, that EA doesn't have to justify itself more so than any other similarly-sized, similarly powerful movement/group/ideology. Of course, it seems very much worth listening if a reasonable-seeming and informed person tells you "you fail at your aims."
Jason @ 2023-01-05T00:47 (+6)
I would have agreed pre-FTX. In my view, EA actors meaningfully contributed -- in a causal sense -- to the rise of SBF, which generated significant widespread harm. Given the size and lifespan of EA, that is enough for a presumption of sufficient risk of future external harm for standing. There were just too many linkages and influences, several of them but-for causes.
EA has a considerable appetite for risk and little of what some commenters are dismissing as "bureaucracy," which increases the odds of other harms felt externally. So the presumption is not rebutted in my book.
freedomandutility @ 2023-01-02T19:32 (+36)
I think ever since EA has become more of an “expected value maximisation” movement rather than a “doing good based on high quality evidence” movement, it has been quite plausible for EA activity overall, or community building specifically, to turn out to be net-negative in retrospect, but I think the expected value of community building remains extremely high.
I support more emphasis on thin EA and the development of a sort of rule of thumb for what a good ratio of meta spending vs object level impact spending would be.
Strongly agree that it is surprising that some of Carla Zoe Cremer's reforms haven't been implemented. Frankly, I would guess the reason is that too many leadership EAs are overconfident in their decision-making and are much too focused on "rowing" instead of "steering", in Holden Karnofsky's terms.
“ Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.” Why do you think this? Is it mostly intuition?
My view of other social movements is that they undervalue efforts to increase power which is why most are unsuccessful. I credit a lot of EA’s success in terms of object level impact to a healthy degree of focus on increasing power as a means to increasing impact.
RobertJMoore @ 2023-01-03T14:51 (+8)
“ Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.”
While I am unaware of any actual studies supporting it (indeed, the nature of the problem makes it rather resistant to study), that statement sounds like a rephrasing or redevelopment of what's sometimes known as Pournelle's Iron Law of Bureaucracy:
Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representative who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions.
Your last line, if I'm understanding you correctly, is to suggest that this is a good thing because of the nature of those in the second category in EA. One can imagine situations where this would be the case, such as Plato's philosopher-kings worthy of trust.
Jan_Kulveit @ 2023-01-03T09:42 (+24)
Just wanted to flag that I personally believe
- most of Cremer's proposed institutional reforms are either bad or zero impact, this was the case when proposed, and is still true after updates from FTX
- it seems clear proposed reforms would not have prevented or influenced the FTX fiasco
- I think part of Cremer's reaction after FTX is not epistemically virtuous; "I was a vocal critic of EA" - "there is an EA-related scandal" - "I claim to be vindicated in my criticism" is not sound reasoning, when the criticisms are mostly tangentially related to the scandal. It will get you a lot of media attention, in particular if you are willing to cooperate in being presented as some sort of virtuous insider who was critical of the leaders and saw this coming, but I hope upon closer scrutiny people are actually able to see through this.
edit: present yourself as replaced with are willing to cooperate in being presented
howdoyousay? @ 2023-01-03T14:33 (+60)
I don't think this is a fair comment, and aspects of it read more like a personal attack than an attack on ideas. This feels especially the case given the above post has significantly more substance and recommendations to it, but this one comment just focuses in on Zoe Cremer. It worries me a bit that it was upvoted as much as it was.
For the record, I think some of Zoe's recommendations could plausibly be net negative and some are good ideas; as with everything, it requires further thinking through and then skillful implementation. But I think the amount of flack she's taken for this has been disproportionate and sends the wrong signal to others about dissenting.
I think this aspect of the comment is particularly harsh, which is in and of itself likely counterproductive. But on top of that, it's not the kind of claim that should be made lightly or without a lot of evidence that that is the person's agenda (bold for emphasis):
- I think part of Cremer's reaction after FTX is not epistemically virtuous; "I was a vocal critic of EA" - "there is an EA-related scandal" - "I claim to be vindicated in my criticism" is not sound reasoning, when the criticisms are mostly tangentially related to the scandal. It will get you a lot of media attention, in particular if you present yourself as some sort of virtuous insider who was critical of the leaders and saw this coming, but I hope upon closer scrutiny people are actually able to see through this.
Lukas_Gloor @ 2023-01-03T17:58 (+37)
This discussion here made me curious, so I went to Zoe's twitter to check out what she's posted recently. (Maybe she also said things in other places, in which case I lack info.) The main thing I see her taking credit for (by retweeting other people's retweets saying Zoe "called it") is this tweet from last August:
EA seems to me to be unwilling to implement institutional safeguards against fuck-ups. They mostly happily rely on a self-image of being particularly well-intentioned, intelligent, precautious. That’s not good enough for an institution that prizes itself in understanding tail-risk.
That seems legitimate to me. (We can debate whether institutional safeguards would have been the best action against FTX in particular, but the more general point of "EAs have a blind spot around tail risks due to an elated self-image of the movement" seems to have gotten a "+1" score with the FTX collapse, and with EAs not having seen it coming despite some concerning signs.)
There's also a tweet by a journalist that she retweeted:
3) Critics (eg @CarlaZoeC @LukaKemp) warned that EA should decentralize funding so it doesn’t become a closed validation loop where the people in SBF’s inner circle get millions to promote his & their vision for EA while others don’t. But EA funding remained overcentralized.
That particular wording sounds suspiciously like it was tailored to the events with hindsight, in which case retweeting without caveats is potentially slightly suboptimal. But knowing what we know now, I'd indeed be worried about a hypothetical world where FTX hadn't collapsed! Where Sam's vision of things and his attitude to risks gets to have such a huge degree of influence within EA. (That said, it's not like we can just wish money into existence from diversified sources of funding – in Zoe's document, I saw very little discussion of the costs of cutting down on "centralized" funding.)
In any case, I agree with Jan's point that it would be a massive overreaction to now consider all of Zoe's criticisms vindicated. In fact, without a more detailed analysis, I think it would even be premature to say that she got some important details exactly right (especially when it comes to suggestions for change).
Even so, I think it's important to concede that Zoe gets significant credit here at least directionally, and that's an argument for people to (re-)engage with her suggestions if they haven't already done so or if there's a chance they may have been a bit closed off to them the last time.
(My own view remains skeptical, though, as I explained here.)
Davidmanheim @ 2023-01-03T18:29 (+74)
3) Critics (eg @CarlaZoeC @LukaKemp) warned that EA should decentralize funding so it doesn’t become a closed validation loop where the people in SBF’s inner circle get millions to promote his & their vision for EA while others don’t. But EA funding remained overcentralized
I think the FTX regranting program was the single biggest push to decentralize funding EA has ever seen, and it's crazy to me that anyone could look at what FTX Foundation was doing and say that the key problem is that the funding decisions were getting more, rather than less, centralized. (I would be interested in hearing from those who had some insight into the program whether this seems incorrect or overstated.)
That said, first, I was a regrantor, so I am biased, and even aside from the tremendous damage caused by the foundation needing to back out and the possibility of clawbacks, the fact that at least some of the money which was being regranted was stolen makes the whole thing completely unacceptable. However, it was unacceptable in ways that have nothing to do with being overly centralized.
MichaelStJules @ 2023-01-04T03:35 (+19)
This seems right within longtermism, but, AFAIK, the vast majority of FTX's grantmaking was longtermist. This decision to focus on longtermism seemed very centralized and might otherwise have shaped the direction and composition of EA disproportionately towards longtermism.
Chris Leong @ 2023-01-05T03:26 (+4)
If FTX's decentralised model had been proven successful for long-termism, I suspect it would have influenced the way funding was handled for other cause areas as well.
MichaelStJules @ 2023-01-05T08:12 (+3)
In case my wording was confusing, I meant that a community shift towards longtermism seems to have been decided by a small number of individuals (FTX founders). I'm not talking about centralization within causes, but centralization in deciding prioritization between causes.
Also, I'm skeptical that global health and poverty or animal welfare would shift towards very decentralized regranting without a massive increase in available funding first, because
- some of the large cost-effective charities that get funded are still funding-constrained, and so the bars to beat seem better defined, and
- there already are similar experiments on a smaller scale through the EA Funds.
Chris Leong @ 2023-01-05T08:29 (+2)
Yeah, I got that, I was just mentioning an effect that might have partially offset it.
I agree that a small number of individuals decided that the funds should focus on longtermism, although this is partially offset by how the EA movement was shifting in that direction anyway.
Davidmanheim @ 2023-01-04T11:13 (+4)
Yes, that seems correct.
Jan_Kulveit @ 2023-01-03T20:11 (+8)
I think you lack part of the context, where Zoe seems to claim to the media that the suggested reforms would have helped:
- this Economist piece, mentioning Zoe about 19 times
- WP
- this New Yorker piece, with Zoe explaining "My recommendations were not intended to catch a specific risk, precisely because specific risks are hard to predict” but still saying ... “But, yes, would we have been less likely to see this crash if we had incentivized whistle-blowers or diversified the portfolio to be less reliant on a few central donors? I believe so.”
- this twitter thread
James Ozden @ 2023-01-03T22:18 (+16)
- this New Yorker piece, with Zoe explaining "My recommendations were not intended to catch a specific risk, precisely because specific risks are hard to predict” but still saying ... “But, yes, would we have been less likely to see this crash if we had incentivized whistle-blowers or diversified the portfolio to be less reliant on a few central donors? I believe so.”
To be fair, this seems like a reasonable statement on Zoe's part:
- If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says "likely" which is obviously not particularly specific, but this would fit my definition of likely.
- If EA had diversified our portfolio to be less reliant on a few central donors, this would have also (quite obviously) meant the crash had less impact on EA overall, so this also seems true.
Basically, as other comments have stated, you do little to actually say why these proposed reforms are, as you initially said, bad or would have no impact. I think if you're going to make a statement like:
"it seems clear proposed reforms would not have prevented or influenced the FTX fiasco"
You need to actually provide some evidence or reasoning for this, as clearly lots of people don't believe it's clear. Additionally, it feels unfair to call Zoe "not epistemically virtuous" when you're making quite bold claims without any reasoning laid out, and then saying it would be too time-intensive to explain your thinking.
For example, you say here that you're concerned about what democratisation actually looks like, which is a fair point and a useful object-level argument, but this seems more like a question of implementation than a sign that the underlying idea is necessarily bad.
Jan_Kulveit @ 2023-01-04T00:18 (+35)
- If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says "likely" which is obviously not particularly specific, but this would fit my definition of likely.
Why do you think so? Whistleblowers inside of FTX would have been protected under US law, and US institutions like the SEC offer them multi-million-dollar bounties. Why would an EA scheme create a stronger incentive?
Also: even if the possible whistleblowers inside of FTX were EAs, whistleblowing about fraud at FTX directed not toward authorities like the SEC, but toward some EA org scheme, would have been a particularly bad idea. The EA scheme would not be equipped to deal with this and would need to basically immediately forward it to the authorities, leading to immediate FTX collapse. The main difference would be putting EAs at the centre of the happenings?
If EA had diversified our portfolio to be less reliant on a few central donors, this would have also (quite obviously) meant the crash had less impact on EA overall, so this also seems true.
I think the 'diversified our portfolio' frame is subtly misleading, because it's usually associated with investments or holdings, but here it is applied to 'donors'. You can't diversify donations the same way. Also: assume you recruit donors uniformly, no matter how wealthy they are. Most of the wealth will be with the wealthiest minority, basically because of how the wealth distribution looks. An attempt to diversify the donation portfolio toward smaller donors ... would look like GWWC?
The only real option for having much less FTX money in EA was to not accept that much FTX funding. That was a tough call at the time, in part because FTX FF seemed like the biggest step toward decentralized distribution of funding, and a big step toward diversifying away from OP.
Jason @ 2023-01-04T04:07 (+25)
The only real option for having much less FTX money in EA was to not accept that much FTX funding. That was a tough call at the time, in part because FTX FF seemed like the biggest step toward decentralized distribution of funding, and a big step toward diversifying away from OP.
And even then, decisions about accepting funding are made by individuals and individual organizations. Would there be someone to kick you out of EA if you accept "unapproved" funding? The existing system is, in a sense, fairly democratic in that everyone gets to decide whether they want to take the money or not. I don't see how Cremer's proposal could be effective without a blacklist to enforce community will against anyone who chose to take the money anyway, and that gives whoever maintains the blacklist great power (which is contrary to Cremer's stated aims).
The reality, perhaps unfortunate, is that charities need donors more than donors need specific charities or movements.
Denkenberger @ 2023-01-06T22:56 (+6)
Also: assume you recruit donors uniformly, no matter how wealthy they are. Most of the wealth will be with the wealthiest minority, basically because of how the wealth distribution looks. An attempt to diversify the donation portfolio toward smaller donors ... would look like GWWC?
It depends on how you define the wealthiest minority, but if you mean billionaires, the majority of philanthropy is not from billionaires. EA has been unusually successful with billionaires. That means if EA mean-reverts, perhaps by going mainstream, the majority of EA funding will not be from billionaires. CEA deprioritized GWWC for several years; I think if they had continued to prioritize it, funding would have gotten at least somewhat more diversified. Also, I find that when talking with midcareer professionals it's much easier to mention donations than switching careers. So I think that more emphasis on donations from people of modest means could help EA diversify with respect to age.
Neel Nanda @ 2023-01-03T23:46 (+25)
If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says "likely" which is obviously not particularly specific, but this would fit my definition of likely.
Why do you believe this? To me, FTX fits more in the reference class of financial firms than EA orgs, and I don't see how EA whistleblower protections would have helped FTX employees whistleblow (I believe that most FTX employees were not EAs, for example). And it seems much more likely to me that an FTX employee would be able to whistle-blow than an EA at a non-FTX org.
Also, my current best guess is that only the top 4 at FTX/Alameda knew about the fraud, and I have not come across anyone who seems like they might have been a whistleblower (I'd love to be corrected on this though!)
Jan_Kulveit @ 2023-01-03T23:29 (+20)
I was reacting mostly to this part of the post
I’ve honestly been pretty surprised there has not been more public EA discussion post-FTX of adopting a number of Cremer's proposed institutional reforms, many of which seem to me obviously worth doing
...
Also, insofar as she'd be willing (and some form of significant compensation is clearly merited), integrally engaging Cremer in whatever post-FTX EA institutional reform process emerges would be both directly helpful and a public show of good faith efforts at rectification.
I think it's fine for a comment to engage with just a part of the original post. Also, if a post advocates for giving someone substantial power, it seems fair to comment on the media presence of that person.
Overall, to me, it seems you advocate for a double standard / a selective demand for rigour.
Post-FTX discussion of Zoe's proposals seems mostly on the level 'Implement Carla Zoe Cremer’s Recommendations' or 'very annoyed this all had to happen before a rethink, given that 10 months earlier, I sat in his office proposing whistleblower protections, transparency over funding sources, bottom-up control over risky donations' or similar high level supportive comments, never going into details of the proposals, and without any realistic analysis of what would have happened. I expressed the opposite sentiment, clearly marking it as my belief.
Have you seen any actual in-detail analysis of how the proposals would have influenced FTX? I have not. I'm sceptical of their helpfulness - for example, with whistleblower protections...
- Many EA orgs have whistleblower protection. Empirically, it seems it had zero impact on FTX, and the damage to the orgs seems independent of this.
- There are already laws and incentives for reporting wire fraud. If there was someone in the know within FTX considering whistleblowing, then, if I understand the SEC and CFTC comments correctly, they would have been eligible for both protection and a bounty in the millions of dollars - and possibly avoided other bad things happening to them, such as going to jail. Why would some EA bounty create a stronger incentive?
- My impression is the original whistleblowing protection proposal was implicitly directed toward "EA charities", not "companies of EA funders".
But I think the amount of flack she's taken for this has been disproportionate and sends the wrong signal to others about dissenting.
Can you link to something specific? I haven't found any specific critical post or comment mentioning her on the forum since Nov.
In contrast, after a Google News search, I think the opposite is closer to reality: media coverage of Zoe's criticism is uncritically positive, and the one taking flack is MacAskill. While I'm sometimes critical of Will, the narrative that he is at fault for not implementing Zoe's proposals seems completely unfair to me.
mhendric @ 2023-01-03T11:31 (+26)
Thanks Jan! Could you elaborate on the first point specifically? Just from a cursory look at the linked doc, the first three suggestions seem to have few drawbacks to me, and seem to constitute good practice for a charitable movement.
- Set up whistleblower protection schemes for members of EA organisations
- Transparent listing of funding sources on each website of each institution
- Detailed and comprehensive conflict of interest reporting in grant giving
Eli_Nathan @ 2023-01-03T14:21 (+7)
I'll note that many EA orgs already have whistleblower protection policies in place and that there are also various whistleblowing protection laws in many jurisdictions (including the US and the UK) which I assume any EA affiliated organization or employee would have to follow.
Jason @ 2023-01-03T14:58 (+15)
I can't speak to orgs, but the scope of legal whistleblowing protection for US private employees is quite narrow -- I think people are calling for something much more robust. Also, I believe those protections often only cover an organization's actions against current employees -- not non-employer actions like blacklisting the whistleblower from receiving grants or trashing them to potential future employers.
Jan_Kulveit @ 2023-01-03T15:50 (+6)
Unfortunately not in detail - it's a lot of work to go through the whole list and comment on every proposal. My claim is not 'every item on the list is wrong', but 'the list is wrong on average', so commenting on three items does not resolve the possible disagreement.
To discuss something object-level, let's look at the first one
'Whistleblower protection schemes' sound like a good proposal on paper, but the devil is in the details:
1. Actually, at least in the EU and UK, whistleblowers pointing out things like fraud or other illegal activity are protected by the law. The protection offered by law is probably stronger than an internal org policy in some cases, and does not apply in other cases. Also, some countries have regulations about what whistleblower protections you should have in place - I assume orgs follow them where they apply.
2. Many orgs where it makes sense have some policies/systems in this direction, but not necessarily under the name of 'whistleblower protection'.
3. The majority of EA orgs are quite small. If you have a team of, e.g., four people, I don't think a whistleblower protection scheme works the same way as in an org with four hundred people. In my view, what often makes more sense is having external contacts for all sorts of issues - e.g. the community health team.
4. Overall, I think the worst situation is often when you have a system which seemingly does something, but actually does not. For example: a campus mental health support system which is not actually qualified to help with mental health problems, but keeps track of who reached out to it, is probably worse than nothing.
My bottom line is something like ... a 'whistleblower protection scheme' may be good to implement in some cases, and some orgs have one. But it is too bureaucratic in other cases. A blanket policy requiring every org to have a formal scheme, no matter its size or circumstances, seems bad.
mikko @ 2023-01-03T16:56 (+14)
The Cremer document mixes two different types of whistleblower policies: protection and incentives. Protection is about trying to ensure that organisations do not disincentivize employees or other insiders from trying to address illegal/undesired activities of the organisation, for example through threats or punishments. Whistleblower incentives are about incentivizing insiders to address illegal/undesired activities.
The recent EU whistleblowing directive for example is a rather complex piece of legislation that aims to protect whistleblowers from e.g. being fired by their employers in some situations.
The US SEC whistleblowing program on the other hand incentivizes whistleblowing by providing financial awards, some 10-30% of sanctions collected, for information that leads to significant findings. This policy, for the US, has a quickly estimated return of 5-10x through first order effects, and possibly many times that in second order effects through stopping fraud and reducing the expected value of fraud in general. The SEC gives several awards each month. A report about the program is available here for those interested.
Whistleblower protections tend to be more bureaucratic and are already covered by US and EU legislation to such an extent that improving them seems difficult. Whistleblower incentive mechanisms meanwhile seem much more worthwhile to investigate, because such a mechanism could be operated by a small centralized function without adding any new bureaucracy to existing organisations. I suspect that even a minimal whistleblower incentive* mechanism would reduce risks and increase trust within the EA diaspora by increasing the probability that we become aware of risky situations before they snowball into larger crises.
(*incentives here might not mean financial awards like in the SEC program, but something like helping the whistleblower find a new job, or taking the responsibility for investigating the information further instead of expecting the whistleblower to do it. I'd guess that most whistleblowing reports in EA, if any, would involve junior workers who are afraid of losing their income or status in the community, or simply do not have the energy, network, or skills to address the issue directly themselves.)
Jason @ 2023-01-03T15:13 (+21)
"It seems clear proposed reforms would not have prevented or influenced the FTX fiasco" doesn't really engage with the original poster's argument (at least as I understand it). The argument, I think, is that FTX revealed the possibility that serious undiscovered negatives exist, and that some of Cremer's proposed reforms and/or other reforms would reduce those risks. Given that they involve greater accountability, transparency, and deconcentration of power, this seems plausible.
Maybe Cremer is arguing that her reforms would have likely prevented FTX, but that's not really relevant to the discussion of the original post.
Jan_Kulveit @ 2023-01-03T17:15 (+4)
I'm not confident what the whole argument is.
In my reading, the OP updated toward the position "it’s plausible that effective altruist community-building activities could be net-negative in impact, and I wanted to explore some conjectures about what that plausibility would entail" based on FTX causing large economic damage. One of the conjectures based on this is "Implement Carla Zoe Cremer’s Recommendations".
I'm mostly arguing against the position that 'the update of probability mass on EA community building being negative due to FTX evidence is a strong reason to implement Carla Zoe Cremer’s Recommendations'
For comparison: I held the position that effective altruist community-building activities could be net-negative in impact before FTX and did not update much on the FTX evidence. In my view, the main reason for plausible negativity is EA seems much better at "finding places of high leverage" where you can influence the trajectory of the world a lot, than in figuring out what to actually do in those places. In my view, interventions against the risk include emphasis on epistemics, pushing against local consequentialist reasoning, and pushing against free-floating "community building" where people not working on the object level try mostly to bring in a lot of new people.
Personally, I think implementing Zoe Cremer’s Recommendations as a whole either does not impact the largest real risks, or would make the negative outcomes more likely. Repeated themes in the recommendations are 'introduce bureaucracy' and 'decide democratically'. I don't think bureaucracies are wise, and in 'democratizing' things the big question is 'who is the demos?'.
Joel Becker @ 2023-01-02T20:47 (+18)
Thank you for this post. The framing of your points as conditional is especially helpful.
I strongly agree with lots here. As someone who has worked on community building-ish projects that are very far from or very close to frontline/object-level work, this part rang especially true:
Insofar as it makes sense to have less of a default presumption towards the value of community building, a way of de-risking community building activities is to link them more closely to activities where the case for direct impact is stronger.
People interested in the claim might be interested in this related post and discussion.
justanusername @ 2023-01-03T19:32 (+16)
A milder statement of this is almost certainly already accepted by EA leadership and we should see the impact when the EA brownout ends.
A year ago, generating more SBFs was the brief argument for the high EV of community building. A common refrain: "SBF is contributing so much to EA causes, if what we're spending on community building generates even just one more SBF it will be worth it."
Now turn SBF to a negative value in that equation, or even merely a zero. The end result may be non-negative, but the EV of community building is greatly reduced.
Many in EA positions who have funded community-building orgs are probably now smarting at having mis-invested based on a false perception of SBF's value.
If there is a hard part, it will be resisting the temptation to keep including hypothetical non-fraudulent SBFs in our EV calculations even though the actual SBF was not high value, as we have habituated to that way of thinking.
david_reinstein @ 2023-01-04T01:34 (+13)
I don’t see how this statement can be justified:
$8B in economic damage due to FTX’s fraud
8 billion in value was not destroyed. The net effect is mainly distributional. Financial markets are largely zero sum. Some investors lost a lot, others gained. If it hurt the price of crypto assets this means that overall, those who have assets other than crypto are marginally better off.
Of course the chaos causes some value to be lost, but not 8 billion.
Jason @ 2023-01-04T02:23 (+18)
If someone steals my car, is there no "economic damage" because the thief is now better off to the extent of my loss? I would say I suffered economic damage and someone else got a benefit; the existence of that benefit does not negate the damage I incurred.
Larks @ 2023-01-04T03:06 (+28)
There is economic damage, but not necessarily equal to the headline number. It is reduced by netting against the gains to the thief, but increased by things like stress, required investments in security, disruption to plans, degraded incentives, and so on. In this case I would guess the economic damage is very large but still less than $8bn. In the case of a personal mugging I would guess the economic damage far exceeds the value of the contents of your wallet.
You might also reasonably object that the gains to the thief shouldn't count because they are illegitimate. However, in the FTX case many of the gains seem to have gone to other traders who profited without being guilty.
Habryka @ 2023-01-04T19:56 (+4)
I feel pretty confused by this and would love better estimates of the actual amount of money that was "lost" in the FTX situation. It seems plausible to me (though not likely) that it's above $8B, since a lot of people made plans conditional on FTX being legitimate in a way that now wiped out a lot of economic gains, and the long-term trust that was lost in the markets was worth more than $8B.
My best guess number for this is something in the $3-4B range, but that's really very much an ass-number.
david_reinstein @ 2023-01-04T13:13 (+1)
As usual, the best definition of the term depends on the use you want to make of it.
From a social welfare standpoint, if the thief values the car as much as you did, and he doesn’t spend resources covering up his crime, and you don’t incur an expense in filing police reports etc., there is no social loss.
I wouldn’t want to, e.g., count the FTX blowup as an 8 billion dollar loss in making cost effectiveness analysis comparisons to something like GiveDirectly.
Dean Abele @ 2023-01-10T08:32 (+1)
In general, I thought economic studies say the damage from fraud is much bigger than the distributional effect, due to loss of trust, etc. I can try to find sources if anyone is interested.
SebastianSchmidt @ 2023-01-07T18:38 (+10)
I find it implausible that EA movement building is net-negative (<10%). However, I do appreciate the importance of not being unconditionally enthusiastic about movement-building as some specific forms may very well be net-negative. Some things I'd like to be aware of going forward:
1. Attempt to do things that reasonable non-EA entities will find valuable (e.g., by not being dependent on EA funders and collaborating more with non-EA actors).
2. Be very aware of who we put on a pedestal as promoters and social role models. E.g., I appreciate MacAskill in many ways and have been inspired by him, but I think he's too emphasized as the EA leader/role model, and I would like to hear other voices better represented.
Misha_Yagudin @ 2023-01-07T18:47 (+4)
If you think that movement building is effective in supporting the EA movement, you need to think that the EA movement is negative. I honestly can't see how you can be very confident in the latter. Screwing things up is easy; unintentionally messing up AI/LTF stuff seems easy, and given the high stakes, causing massive amounts of harm is an option (it's not an uncommon belief that FLI's Puerto Rico conferences turned out negatively, for example).
SebastianSchmidt @ 2023-01-07T19:10 (+4)
"If you think that movement building is effective in supporting the EA movement, you need to think that the EA movement is negative."I think you might mean something like "If you think that movement building is effective in supporting the EA movement, you need to think that the EA movement is definitely not negative."?.
I think it depends on how we operationalize community-building. I can definitely see how some forms of community-building are probably negative, and I'd want it to be high quality and relatively targeted.
What are some of the reasons why people think the Puerto Rico conference is negative?
Misha_Yagudin @ 2023-01-08T12:18 (+2)
The point was that there is a non-negligible probability that EA will end up negative.
SebastianSchmidt @ 2023-01-11T07:53 (+1)
Yes, I agree that there's a non-negligible P that this will happen and that some events will be very harmful (heavy-tailed). Currently, however, saying that it's >10% seems too high but I could definitely change my mind. But I'm sufficiently worried about this to be skeptical of broad and low-fidelity outreach and I solicit advice from people who are generally skeptical of all forms of movement-building to be sure that we're sufficiently circumspect in what we do.
Jack_S @ 2023-01-03T10:19 (+10)
I think I'm not following the first stage of your argument. Why would the FTX fiasco imply that community building specifically (rather than EA generally) might be net-negative?
Karthik Tadepalli @ 2023-01-03T10:31 (+11)
I think the idea is that EA institutions look much worse after FTX but EA causes do not. SBF being a fraud may cause you to update about whether (e.g.) CEA is a good organization but should not cause you to update on bednets/AI.
Lukas_Gloor @ 2023-01-03T14:13 (+10)
Reading the first paragraph of the OP, here's me trying to excavate the argument:
- Just like positive impact is likely "heavy-tailed," so is negative impact (see also this paper)
- Introducing people to EA ideas increases their agentiness and "attempts to optimize"
- Sometimes when people try to optimize something, things go badly wrong (e.g., FTX)
- It's conceivable, therefore, that EA community building has net negative impact
I think the argument is incomplete. Other things to think about:
- Are there any reasons why it might be systematically easier to destroy value than to create it?
- Seems plausible.
- But: What's the alternative, what's the default trajectory without an EA "movement" of some sort?
- Doesn't seem like much value?
- Beware of false dichotomies: Instead of movement building vs. no movement building, are there ways to increase the robustness of movement building?
- E.g., not promoting individuals with a particular psychology who may be disproportionately likely to end up with outsized negative impact?
- Edit: worth saying that the OP does provide constructive suggestions!
Lukas_Gloor @ 2023-01-03T15:25 (+29)
After reading also the other parts of the post, I think the OP makes further claims about how the best way to counteract the risks of unintended negative impact is via "institutional reforms" and "democratization."
I'm not convinced that this is the best response. I think overdoing it with institutional reforms would add a bunch of governance overhead that unnecessarily* slows down good actors and can easily be exploited/weaponized (or even just sidestepped) by bad actors. Also, "democratization" sounds virtuous in theory, but large groups of people collectively tend to have messed up epistemics, since the discourse amplifies applause lights or even quickly becomes toxic because of dynamics where we mostly hear from the most vocal skeptics (who often have a personal grudge or some other problem) and all the armchair quarterbacks who don't have a clue of what they're missing. There comes a point where you'll get scrutinized a lot more for bad actions than for bad omissions (or for other things that somewhat randomly and unjustifiably evoke moral outrage in specific people – see this comment by Jonas Vollmer).
Maybe I'm strawmanning the calls for reform and people who want governance safeguards mostly mean things that I would also agree with. I want to make clear that there are probably quite a few suggestions in the spirit of "institutional reforms" where I'd be in favor. It's not that I think all governance overhead is bad – e.g., I think boards are quite essential if the board members are actually engaged and committed to an org's mission. Also, I think EA orgs should give regular updates where the leadership communicates transparently the reasons why they did what they did, speaking real talk instead of putting up a PR front. (I think there's a lot of room for improvement here!)
*It's not like governance safeguards will magically change people who don't have the required qualities into competent good actors. I concede that there's a lot of truth to "power (without accountability) corrupts." So, one might argue "even good leaders may turn bad if they aren't accountable to anyone." However, that seems like a definitional dispute. As a "good leader," you'd be terribly scared of mission-drift and becoming corrupted, so you'd seek out a way to stay accountable to people whose judgment you respect. If you're not terribly scared of these things, or if you're the only person in the world whose judgment you respect, then you're not a good leader in the first place. Processes work well when they're designed by a founder (or CEO) who's highly committed to the org's mission and who has a vision. Stuff that's externally imposed on people rarely has its desired effects. If we want to reap the fruits of highly impactful organizations or institutions, we have to be prepared to give some founders or CEOs (the right ones!) a cushion of initial trust. (And then continue to watch them carefully so they don't use it all up and go negative.)
Jason @ 2023-01-04T01:51 (+22)
The problem is that most calls for reform lack specifics, and it is very difficult to meaningfully assess most reform proposals without them.
However, that is not necessarily the reformers' fault. In my view, it's not generally appropriate to deduct points for not offering more specific proposals if the would-be reformer has good reason to believe that reasonable proposals would be summarily sent to the refuse bin.
If Cremer's proposals in particular are getting a lot of glowing media attention, it seems like it would be worthwhile to do a clearer job as a community explaining why her specific proposals lack enough promise in their summary form to warrant further investigation, and to make an attempt to operationalize ideas that might be warranted and feasible. Even if the ideas were ultimately rejected, "the community discussed the ideas, fleshed some of them out, and decided that the benefits did not exceed the costs" is a much more convincing response from an optics perspective than blanket dismissals.
My own tentative view is that her specific ideas range from the fanciful (and thus unworthy of further investigation/elaboration) to the definitely-plausible-if-fleshed-out, so I think it's important to take each on its own merits. That is, of course, not a suggestion that any individual poster here has an obligation to do that, only that it would be a good thing if done in some manner. On average, the ideas I've seen described on the forum are better because they are less grand / more targeted + specific.
Jack_S @ 2023-01-03T17:14 (+3)
Yeah, makes sense. I just don't know why it's not just: "It's conceivable, therefore, that EA community building has net negative impact."
If you think that EA is/ EAs are net-negative value, then surely the more important point is that we should disband EA totally / collectively rid ourselves of the foolish notion that we should ever try to optimise anything/ commit seppuku for the greater good, rather than ease up on the community building.
Davidmanheim @ 2023-01-03T18:36 (+5)
...because we have object-level data on the impact of many things, but very little on the net impact of community building on the object-level outcomes we care about. And community building is a very indirect impact, so on priors we should be less certain of how useful it is.
joshcmorrison @ 2023-01-03T19:23 (+4)
I think I did a poor job of distinguishing what I call "institutional EA" (or "EA community building") from EA (or "EA as an idea"). But basically, there's a difference between the idea of attempting to do good using evidence (or whatever your definition of EA might be) and particular efforts to expand the circle of people who identify as/affiliate with effective altruists. The former is what I'm calling EA/idea of EA and the latter is community building.
As might be obvious from this description, there are many possible ways to do EA community building, which might have better or worse effects (and one could think that community building efforts on average will have positive or negative effects). My claim is that the set of EA community building efforts conducted to date may plausibly have had net negative effects.
PeterSlattery @ 2023-01-04T04:08 (+8)
[Giving myself 5 minutes to reply with a quick point - and failing!] Thank you for writing this. Here are some quick low confidence thoughts on the main argument you made.
I don't think I understand why you attribute any issues from FTX to community building specifically. The FTX outcome was a convergence of many factors, and movement building doesn't obviously seem to be the most important. Many other practices associated with EA, like philosophising, overconfidence, prioritisation, or promoting earning to give, could be similarly implicated.
I agree that community building can be net negative in some cases, but I think it is <1% likely it has been negative overall. I personally worry that posts like this could push us in the wrong direction. From my perspective, we have never sufficiently prioritised doing and testing our community building.
Without past EA movement building, most bad things that involved EA in this timeline would still have happened in different timelines. There would very likely still have been cryptocurrency-related frauds and failures, and charismatic and smart people like SBF who surprised everyone who trusted them. However, many good things would not have happened; e.g., refer to posts/content about what EA has achieved. There would be ~<1% as many people working on or with EA concepts/ideas, and that seems like a very bad thing.
So in general, I think the solution to what happened with FTX is about getting better processes for dealing with people when in the EA community, more so than trying to slow/change community building in any significant way. I appreciate your suggestions and agree with some, but not all. Unfortunately, I don't have time to engage.
joshcmorrison @ 2023-01-05T16:08 (+2)
Thanks for this comment! My argument about community building's particular role is that I think there were certain "community building" efforts specifically that caused the existence of FTX. The founder was urged to work in finance rather than on animal welfare, and then worked at CEA prior to launching Alameda. Alameda/FTX were seen as strategies to expand the amount of funding available to effective altruist causes and were founded and run by a leadership team that identified as effective altruist (including the former CEO of the Center for Effective Altruism). The initial funding was from major EA donors. To me the weight of public evidence really points to Alameda as having been incubated by the Center for Effective Altruism in a fairly clear way.
It's possible that in the absence of Alameda/FTX's existence its niche would have been filled by another entity that would have done similarly bad things, but it seems hard for me to imagine that without institutional EA's backing FTX would have existed.
PeterSlattery @ 2023-01-06T03:30 (+8)
Thanks for explaining, Josh! I understand your position a little better, but I still don't agree that it makes sense to weight the impact of movement building on this outcome more heavily than all the other EA-related (and unrelated) inputs involved, and accordingly, I am still relatively unconvinced that we need to react to the event by significantly changing our perspective on the value of movement building.
Having said that, I still agree with you that we should be careful with movement building, expect and mitigate downside risks, and keep evaluating it and trying to do it better.
Just as an FYI - I probably won't respond to any more comments because of time constraints.
Jason @ 2023-01-03T00:26 (+8)
Could you say more about the possibility of "external" funders for EA community building? It's probably not realistic to get major funding from a Big-Name Generalist Foundation, given that many of EA's core ideas inevitably constitute a severe criticism of how Big Philanthropy works. And it would be otherwise hard to decide who an "external" funder was -- in my book, "gives lots of money to EA community building" is pretty diagnostic for being an EA and thus not external.
One possibility might be that major funders would only pick up (say) 50% of the tab for most projects/organizations, and that would be conditioned as a match on what the project/organization could attract from small/medium donors (SMDs). I suggest that rank-and-file EAs may be collectively more able to discern whether it is worth pulling money away from direct work to fund meta work because they are closer to the outputs. I suspect that significant SMD buy-in would also practically enforce some of what you're describing -- it gives more people a practical "vote" and the ability to throw a flag if things are starting to go off the rails.
Of course, requiring meta organizations/projects to significantly rely on SMDs has its costs too. But raising money from SMDs is less efficient for any organization than getting the bulk of funding needs from a few big fish. So the question is which organizations should get a greater percentage of their funding from big donors. In contrast to meta organizations, I don't think there is any significant benefit from having most direct-work organizations more reliant on SMD funding.
Davidmanheim @ 2023-01-03T18:34 (+3)
To answer this, from my perspective, I'll quote from my post a few months back:
First, I think that we should expect communities to be self-supporting, outside of donor dollars. Having work spaces and similar is great, but it’s not an impartially altruistic act to give yourself a community. It’s much too easy to view self-interested “community building” as actually altruistic work, and a firewall would be helpful.
Given that, I strongly think that most EAs would be better off giving their 10% to effective charities focused on the actual issues, and then paying dues or voluntarily contributing other, non-EA-designated funds for community building. That seems healthier for the community, and as a side-benefit, removes the current centralized “control” of EA communities, which are dependent on CEA or other groups.
Jason @ 2023-01-03T19:45 (+4)
Thanks, David. I think the best approach is probably more complicated than my 10,000 foot comment -- "work spaces and similar" are in a different category to me than EAGs, which are in turn in a different category than funding early EA community-building work in middle-income countries. The appropriate "coinsurance" will vary depending on the specific project, but I think you're right that it may be 100 percent for some of them.
Davidmanheim @ 2023-01-04T11:18 (+3)
Strongly agree - and if Dustin Moskovitz or Jaan Tallinn wants to fund early groups in universities or in developing countries, that seems like a great place to give part of the far-more-than-10%. (But I'd still like it more if that giving wasn't called or considered EA donations.)