Rethink Priorities’ Leadership Statement on the FTX situation
By abrahamrowe, Peter Wildeford, Marcus_A_Davis @ 2022-11-23T22:54 (+266)
From the Executive Team and Board of Directors of Rethink Priorities (Peter Wildeford, Marcus Davis, Abraham Rowe, Kieran Greig, David Moss, Ozzie Gooen, Cameron Meyer Shorb, and Vicky Bond).
We were saddened and shocked to learn about the extremely serious alleged misdeeds and misconduct of Sam Bankman-Fried and FTX. While we are still trying to understand what happened and the consequences of these events, we are dismayed that customer funds may have been used improperly, and that, currently, many customers are unable to retrieve funds held by FTX. We unequivocally and in the strongest possible terms condemn any potential fraud or misuse of customer funds and trust that occurred at FTX. The actions that Bankman-Fried and FTX have been accused of are out of line with the values that we believe in and try to represent as an organization.
At this time, Rethink Priorities remains in a stable financial and legal position. We do not plan on laying off staff or cutting salaries in response to these events or to the changed financial condition of the EA space. However, the strategies of our General Longtermism, Special Projects, and Surveys teams were partly based on the existence of FTX funding for Rethink Priorities and others in the EA community. For the time being, we've mainly paused further hiring for these programs and are revisiting our strategies for them going forward. We’ve decided that hiring for our Special Projects team, which was already in progress before we learned about the FTX situation, will proceed in order to evaluate and onboard new fiscal sponsees.
Unfortunately, this situation does impact our long-term financial outlook and our ability to keep growing. Rethink Priorities continues to have large funding needs and we look forward to sharing more about our plans with the community in the next few days. We will need to address the funding gap left by these changed conditions for the coming years.
In terms of legal exposure, Rethink Priorities’ legal counsel are looking into the possibility of clawbacks of funds previously donated to us by FTX-related sources. At this time, we are not aware of any other significant legal exposure for Rethink Priorities or its staff.
Prior to the news breaking this month, we already had procedures in place intended to mitigate potential financial risks from relying on FTX or other cryptocurrency donors. Internally, we've always had a practice of treating pledged or anticipated cryptocurrency donations as less reliable than other types of donations for fundraising forecasting purposes, simply due to volatility in that sector. As a part of regular crisis management exercises, we also engaged in an internal simulation in August around the possibility of FTX funds no longer being available. We did this exercise due to the relative size and importance of the funding to us, and the base failure rates of cryptocurrency projects, not due to having non-public information about FTX or Bankman-Fried.
In hindsight, we believe we could have done more to share these internal risk assessments with the rest of the EA community. Going forward, we are reevaluating our own approach to risk management and the assessment of donors, though we do not believe any changes we will make would have caught this specific issue.
As mentioned above, Rethink Priorities is receiving legal advice on clawbacks, and we are happy to share resources with other organizations that are concerned about their exposure. We cannot provide legal advice, but we are able to provide information on our own response—please reach out to Abraham Rowe (abraham@rethinkpriorities.org) for more information.
James Ozden @ 2022-11-23T23:57 (+145)
Somewhat off-topic, but I think it's quite remarkable that RP does crisis management and simulation exercises like this! I'm glad that RP is stable financially and legally (at least in the short term), and I put a significant chunk of that down to your collective excellent leadership.
Habryka @ 2022-11-24T05:59 (+69)
I appreciate some of the generator behind this post, but also have hesitations about sentences like this:
We unequivocally and in the strongest possible terms condemn any potential fraud or misuse of customer funds and trust that occurred at FTX.
I think a really core part of Effective Altruism is the recognition that being a good person is hard and messy. Figuring out what moral principles to follow is complicated and requires extensive analysis and thinking. When I hear prominent EAs, people with whom I have often personally discussed the ethics of lying to Nazi officials when hiding Jews in your closet, or the tradeoffs of cooperating with corrupt government regimes in foreign countries, say that they now suddenly "unequivocally condemn all fraud", I feel gaslit and confused by what is happening.
These are not sentences that anyone I worked with in EA believed a month ago, and I don't think that you believe them now. Yes, almost all fraud is bad. But it's not perfectly clear cut, as is the case with almost all clear lines one might want to draw in the domain of ethics. The central cases of fraud at FTX that we know about seem very likely to have been quite bad, and we should almost certainly condemn them. But it looks hypocritical and actively harmful to our ability to update and learn if we now write posts like this that frame the law as some kind of perfect guide to ethical behavior, when I think a month ago nobody would have said something as strong as this without being immediately shut down for overgeneralizing.
As people go, I am probably more strongly opposed to fraud than the vast majority of people in the EA community. I am an extreme stickler for honesty and openness and truth, and I am very willing to condemn and punish dishonest and deceptive behavior. But I think in order to actually do this requires distinguishing between the magnitude of badness of different behaviors, and it requires recognizing that we do not know, and might never know, what all the things are that happened at FTX, and as such making sure that we are precise in what we condemn.
I just don't know how you would know enough detail to say this, and it feels to me like statements like this are being used for rhetorical purposes in your announcement, without much care for their actual literal meaning.
All kinds of small things can count as fraud. I can easily imagine a world where regulation literally puts a crypto exchange into an inescapable catch-22 where it has to commit some minor fraud as the least bad of the options available (I remember similar situations during COVID, where doctors were both legally required not to throw any vaccine doses away at the end of the day, and also weren't allowed to give a dose to anyone who hadn't registered), and if someone had handled that ethically and tried to minimize the damage, then I definitely wouldn't condemn it.
There are also just lots of employees at FTX, and many of them seemed really well-intentioned, and if one of them had accidentally done something fraud-like, then I also wouldn't condemn that "in the strongest possible terms" (separately, clearly there are other things that you would condemn more, like intentionally causing a nuclear war, or developing a bioweapon with omnicidal intentions, so "strongest" also feels ungrounded here). It seems bad, but like, I do want to reserve my "in the strongest possible terms" condemnations for things that are actually really bad.
And then there are also lots of scenarios that we can't rule out, even if any one of them doesn't seem that likely. It's plausible that at some point the family of someone at FTX was literally threatened with murder (not unreasonable given the amount of money involved), and that fraud was somehow the most humane way out of that (maybe by pretending to be working with the U.S. or Bahamian government). I feel like our whole job as a community dedicated to effective ethics is to maintain openness that these kinds of things could happen, and were potentially justified.
Just to be clear, we know that there were very likely a number of instances of highly immoral and reckless fraud happening at FTX, and I think it totally makes sense to condemn those, but I think it really pays off to be specific in situations like this, and not speak in generalities that stretch beyond what we actually know.
I understand that in situations like this it's tempting to use strong rhetoric in order to express strong feelings, but I actually think communicating accurately and in the right measures is more important in situations like this than in almost all other situations.
Given that I have been encouraging lots of people to write more about the FTX situation, I want to clarify that I have a dispreference for posts like this. I don't think they are terrible, but the kind of writing that I am interested in is people sharing observations and hypotheses and trying to do collective sense-making, not public statements like this, which convey information that helps people orient only incidentally to their core, more social-reality-based content.
I still appreciate you writing anything at all, and do think there is useful information in this post. I do also wish it wasn't dressed up as much in statements of conviction (which I genuinely don't think are the right thing to do at this point, and I think the right thing is to express curiosity and confusion and open-mindedness about what the lessons to take away from this whole mess are).
Linch @ 2022-11-30T11:07 (+45)
When I hear prominent EAs, people with whom I have often personally discussed the ethics of lying to Nazi officials when hiding Jews in your closet, or the tradeoffs of cooperating with corrupt government regimes in foreign countries, say that they now suddenly "unequivocally condemn all fraud", I feel gaslit and confused by what is happening.
These are not sentences that anyone I worked with in EA believed a month ago, and I don't think that you believe them now.
Hi Oli. I talked to a friend and they pointed out that many people reading your comment may reasonably read you as saying a) there's a direct quote where RP leadership in the post above "unequivocally condemn all fraud" (including fraud relevant to hiding Jews from Nazi officials) and b) that you've "often personally discussed" ethics with RP leadership.
a) is clearly false, as the post above only refers to condemning "any potential fraud or misuse of customer funds and trust that occurred at FTX," and does not refer to any broad condemnation of fraud or fraud-like activities relevant to hiding Jews from Nazis or cooperating with corrupt government regimes. I'm guessing b) is false as well based on my personal understanding of how often you interact with RP leadership.
I can see how this is an honest misunderstanding; however, it'll be helpful to be very clear that in the statement above, RP leaders gave no literal indication one way or another about their position on e.g. lying to Nazis.
Habryka @ 2022-11-30T18:55 (+5)
Makes sense! I do think it makes some difference, though not a huge one, that the statement specifies it's just about FTX. I think it's of course a weaker statement, but I also don't really think it changes much of the basic argument (and I think it would be quite bad if we changed our ethical standards for behavior this much based on the context in which it was committed, e.g. to think it's OK for other people to lie to corrupt regimes, but not for people at FTX).
I have discussed ethics with people in RP leadership probably a decent amount over the years (few other topics come up as frequently in EA contexts), so I think b) is accurate, though there are no hard lines here, and I have talked very little to most of the leadership, so it still seems good to clarify.
For example I've talked a pretty decent amount with Ozzie over the years, including about honesty norms, and he is on the RP board (I think not fully clear whether you should count board members as part of leadership).
and does not refer to any broad condemnation for fraud or fraud-like activities relevant to hiding jews from nazis or cooperating with corrupt government regimes.
I do think FTX is pretty close to the top of the list of organizations I can imagine having had to interface with a bunch of corrupt government regimes. The Bahamian government really did not strike me as a very lawful and non-corrupt institution, and FTX had international operations in dozens of countries, often interfacing with areas of legal ambiguity where I expect corruption to come through more strongly than elsewhere.
I of course still agree that saying "at FTX" limits the type of fraud that is being condemned to fraud that might have occurred at FTX, but I think we at the moment have such large uncertainty about what actually happened at FTX that I don't think this actually rules out that many types of fraud (I do agree it rules out specifically lying to Nazis, since there were no Nazi governments during the period when FTX existed, though it does not rule out lying to e.g. the Chinese government, which indeed might actually be quite likely to have occurred given that they were stationed in Hong Kong, which strikes me in many ways as a highly analogous situation).
Habryka @ 2022-11-30T20:29 (+2)
Also, I just reread my top-level comment. It seems quite clear about the "at FTX" part? Like, there are like 4 paragraphs of examples and argument I give, all of which are pretty obviously set in the context of FTX. I still care about not being misunderstood, but I would currently be surprised if someone actually walked away thinking that the OP, or my comment, was not fully aware that the original statement said "at FTX".
Linch @ 2022-11-30T20:46 (+4)
To readers: Agree with my comment if before reading this exchange, you previously thought RP leadership said literally that they "unequivocally condemn all fraud."
(Note that you should agree if you knew in context they didn't mean it completely, but just that you thought this was literally said).
ChanaMessinger @ 2022-11-24T14:48 (+39)
I, speaking for myself, really appreciate the calls for specificity and precision when we make moral judgments and assessments, in general and especially in this moment.
Jason @ 2022-11-28T14:44 (+32)
RP's statement is inherently a Public Media Statement, and the portions condemning certain events at FTX should be evaluated with that in mind. I think it is quite clear that RP was not intending to make a statement about hypothetical "minor" events at FTX that might be technically fraud, or that most readers would understand them as having done so.
To be clear, Public Media Statements should be as accurate as possible -- but the medium informs the message and limits the writer's ability to convey nuance, full depth of meaning, and high degrees of accuracy. The same is true of explaining things to a five-year old; trying to cram in high degrees of technical accuracy or nuance can often interfere with the goals of the communication and make it less accurate.
For instance, I suggest that one of the rules of Public Media Statements is that media outlets will quote a small portion of them, and that quote will be the main way the statement is experienced by readers. So each sentence that a journalist could reasonably quote needs to independently be a reasonably good representation of the organization's opinion. Or the journalist may craft a short summary of the statement -- so the writer needs to place a premium on making sure a reader does not get the wrong impression. And all of this needs to be done in a compact manner. These features significantly limit the range and accuracy of what can be reasonably conveyed via Public Media Statement. So a statement designed for maximum accuracy attainable under the framework of Public Media Statements may not be viewed as maximally accurate when evaluated under the norms for EA Forum Posts.
Furthermore, it is important to consider the extent to which a Public Media Statement accurately conveys the affective stance of the writer in addition to their cognitive stance. "We disapprove of any events at FTX that were fraudulent and caused depositors to lose their money" is cognitively safe -- but its mildness is probably inaccurate in conveying how RP feels about those events. Again, this is limited by the nature of Public Media Statements -- it isn't viable to set up a 10-point scale of ethical lapses and then explain how strongly one feels about the known FTX ethical lapses by reference to that scale and why.
Again, I appreciate the focus on being as accurate as possible -- I just think we have to consider a statement's accuracy in light of the limitations of the medium in which it was expressed, and that attempting to add too much nuance/precision can cause us to less accurately convey the intended meanings in some cases.
SiebeRozendal @ 2022-11-24T13:56 (+26)
There's a time and place to discuss exceptions to ethics and when goals might justify the means, but this post clearly isn't it.
I agree that the more inquisitive posts are more interesting, but this post is clearly not meant to reflect deeply on what to learn from the situation. It's RP giving an update/statement that's legally robust and shares the most important details relevant to RP's functioning.
RavenclawPrefect @ 2022-11-24T18:44 (+29)
I read the original comment not as an exhortation to always include lots of nuanced reflection in mostly-unrelated posts, but to have a norm that on the forum, the time and place to write sentences that you do not think are actually true as stated is "never (except maybe April Fools)".
The change I'd like to see in this post isn't a five-paragraph footnote on morality, just the replacement of a sentence that I don't think they actually believe with one they do. I think that environments where it is considered a faux pas to point out "actually, I don't think you can have a justified belief in the thing you said" are extremely corrosive to the epistemics of a community hosting those environments, and it's worth pushing back on them pretty strongly.
SiebeRozendal @ 2022-11-24T20:58 (+13)
Also note that their statement included "...that occurred at FTX". So not any potential fraud anywhere.
SiebeRozendal @ 2022-11-24T20:55 (+3)
Ah, I didn't mean to imply Habryka's comment was a faux pas. That's awkward phrasing of mine. I just meant to say that the points he raises feel irrelevant to this post and its context.
Habryka @ 2022-11-24T19:42 (+24)
There's a time and place to discuss exceptions to ethics and when goals might justify the means, but this post clearly isn't it.
Wait, if now isn't the time to be specific about what actions we actually condemn and what actual ethical lines to draw, when is it? Clearly one of the primary things that this post is trying to communicate is that Rethink condemns certain actions at FTX. It seems extremely important (and highly deceptive to do otherwise) to be accurate in what it condemns, and in what ways.
Like, let's look ahead a few months. Some lower-level FTX employee is accused of having committed some minor fraud with good ethical justification that actually looks reasonable according to RP leadership, so they make a statement coming out in defense of that person.
Do you not expect this to create strong feelings of betrayal in previous readers of this post, and a strong feeling of having been lied to? Many people right now are looking for reassurance about where the actual ethical lines are that EA is drawing. Trying to reassure those people seems like one of the primary goals of this post.
But this post appears to me to be basically deceptive about where those lines are, or massively premature in its conviction on where to make commitments for the future (like, I think it would both have quite bad consequences to defend an individual who had committed ethically justifiable fraud, and also be a mistake to later on condemn that individual, because I guess RP has now committed to a stance of being against all fraud, independently of its circumstances, with this post written as is).
I think one of the primary functions of this post is to reassure readers about what kind of behavior we consider acceptable and what kind of behavior we do not consider acceptable. Being inaccurate or deceptive about that line is a big deal. I think indeed being accurate about those lines is probably the most important component of posts like this, and the component that will have the longest-ranging consequences.
There is of course an easy way out, which is to just express uncertainty about where the ethical lines are, or to just not make extremely strong statements about where the lines are in the first place that you don't believe. I think we are still learning about what happened. Holden's post and ARC's posts, for example, do not strike me as overstepping what they believe or know.
Linch @ 2022-11-24T20:17 (+17)
Many people right now are looking for reassurance about where the actual ethical lines are that EA is drawing. Trying to reassure those people seems like one of the primary goals of this post.
(speaking for myself. Was not involved in drafting the post, though I read an earlier version of it) FWIW this is very much not how I read the post, which is more like "organizational updates in light of FTX crashing." RP's financial position, legal position, approach to risk, and future hiring plans all seem to be relevant here, at least for current and future collaborators, funders, and employees. They also take up more lines than the paragraph you focused on, and carry more information than discussions about EA ethical lines, which are quite plentiful in the forum and elsewhere.
Habryka @ 2022-11-24T20:35 (+22)
That's possible! My guess is most readers are more interested in the condemnation part though, given the overwhelming support that posts like this have received, which have basically no content besides condemnation (and IMO with even bigger problems on being inaccurate about where to draw ethical lines).
It is plausible that RP primarily aimed to just give an organizational update, though I do think de facto the condemnation part will just end up being more important, have a greater effect on the world, and be referred back to more frequently than the other stuff, so there might just be a genuine mismatch between the primary goals that RP has with this post and where the majority of its effect will come from.
evhub @ 2022-11-28T22:13 (+10)
My guess is most readers are more interested in the condemnation part though, given the overwhelming support that posts like this have received, which have basically no content besides condemnation (and IMO with even bigger problems on being inaccurate about where to draw ethical lines).
I think my post is quite clear about what sort of fraud I am talking about. If you look at the reasons that I give in my post for why fraud is wrong, they clearly don't apply to any of the examples of justifiable lying that you've provided here (lying to Nazis, doing the least fraudulent thing in a catch-22, lying by accident, etc.).
In particular, if we take the lying to Nazis example and see what the reasons I provide say:
When we, as humans, consider whether or not it makes sense to break the rules for our own benefit, we are running on corrupted hardware: we are very good at justifying to ourselves that seizing money and power for our own benefit is really for the good of everyone. If I found myself in a situation where it seemed to me like seizing power for myself was net good, I would worry that in fact I was fooling myself—and even if I was pretty sure I wasn't fooling myself, I would still worry that I was falling prey to the unilateralist's curse if it wasn't very clearly a good idea to others as well.
This clearly doesn't apply to lying to Nazis, since it's not a situation where money and power are being seized for oneself.
Additionally, if you're familiar with decision theory, you'll know that credibly pre-committing to follow certain principles—such as never engaging in fraud—is extremely advantageous, as it makes clear to other agents that you are a trustworthy actor who can be relied upon. In my opinion, I think such strategies of credible pre-commitments are extremely important for cooperation and coordination.
I think the fact that you would lie to a Nazi makes you more trustworthy for coordination and cooperation, not less.
Furthermore, I will point out, if FTX did engage in fraud here, it was clearly in fact not a good idea in this case: I think the lasting consequences to EA—and the damage caused by FTX to all of their customers and employees—will likely outweigh the altruistic funding already provided by FTX to effective causes.
And in the case of lying to Nazis, the consequences are clearly positive.
Habryka @ 2022-11-28T22:33 (+1)
I am working on a longer response to your post, so not going to reply to you here in much depth.
Responding to this specific comment:
I don't think your line of argumentation here makes much sense (making very broad statements like "Fraud in the service of Effective Altruism is unacceptable" but then saying "well, but of course only the kind of fraud for which I gave specific counterarguments"). Your post did not indicate that it was talking about any narrower definition of fraud, and I am confident (based on multiple conversations I've had about it with others) that it was being read by other readers as arguing for a broad definition of fraud. If you actually think it should only apply to a narrower definition of fraud, then I think you should add a disclaimer to the top explaining what kind of fraud you are talking about, or change the title.
evhub @ 2022-11-28T22:46 (+12)
I think you're wrong about how most people would interpret the post. I predict that if readers were polled on whether or not the post agreed with “lying to Nazis is wrong” the results would be heavily in favor of “no, the post does not agree with that.” If you actually had a poll that showed the opposite I would definitely update.
Habryka @ 2022-11-28T23:32 (+10)
I think the Nazi example is too loaded for various reasons (and triggers people's "well, this is clearly some kind of thought experiment" sensors).
I think there are a number of other examples I have listed in the comments on this post that would show this. E.g. something in the space of "a Jewish person lies about their religious affiliation in order to escape discrimination that's unfair to them for something like scholarship money, of which they then donate a portion (partially because they do want to offset the harm that came from being dishonest)" is, I think, a better test case here.
I think people would interpret your post as pretty clearly and strongly against that, in a way that doesn't seem very justified to me (my model of whether this is OK is pretty context-dependent, to be clear).
evhub @ 2022-11-28T22:55 (+4)
Adding on to my other reply: from my perspective, I think that if I say “category A is bad because X, Y, Z” and you're like “but edge case B!” and edge case B doesn't satisfy X, Y, or Z, then clearly I'm not including it in category A.
Habryka @ 2022-11-28T23:28 (+2)
That sounds like a fully generalized defense against all counterarguments, and I don't think that is how discourse usually works. If you say "proposition A is true about category B, for reasons X, Y, Z" and someone else is like "but here is an argument C for why proposition A is not true about category B", then of course you don't get to be like, "oh, well, I of course meant the subset of category B where argument C doesn't hold".
If I say "being honest is bad because sometimes people use true information against you" and you say "but sometimes they won't though and actually use it to help you", then I can't say "well, of course I didn't include that case when I was talking about 'being honest', I was just talking about being honest to people who don't care about you".
Or less abstractly, when you argue that giving money to GiveWell is good because money donated there can go much farther than otherwise, and then GiveWell turns out to have defrauded the money, then you don't get to be like "oh, well, of course, in that case giving money to GiveWell was bad, and I meant to exclude the case where GiveWell was defrauding money, so my original post is still correct".
evhub @ 2022-11-29T00:09 (+4)
That sounds like a fully generalized defense against all counterarguments, and I don't think is how discourse usually works.
It's clearly not fully general because it only applies to excluding edge cases that don't satisfy the reasons I explicitly state in the post.
If you say "proposition A is true about category B, for reasons X, Y, Z" and someone else is like "but here is an argument C for why proposition A is not true about category B", then of course you don't get to be like, "oh, well, I of course meant the subset of category B where argument C doesn't hold".
Sure, but that's not what happened. There are some pretty big disanalogies between the scenarios you're describing and what actually happened:
- The question is about what activities belong to the vague, poorly defined category of “fraud,” not about the truth of some clearly stated “proposition A.” When someone says “category A has property X,” for any vague category A—which is basically all categories of things—there will always be edge cases where it's not clear.
- You're not presenting some new "argument C" for why fraud is good actually. You're just saying there are edge cases where my arguments don't apply. Which is obviously correct! But there are always edge cases for all categories—so this is effectively just an objection to the use of categories at all.
- Furthermore, in this case, I pretty clearly laid out exactly why I thought fraud was bad. Which gives you a lot of evidence to figure out what class of things I was centrally pointing to when using “fraud” as a general category, and it's pretty clear based on those reasons that the examples you're providing don't fit into that category.
Habryka @ 2022-11-29T01:06 (+2)
The question is about what activities belong to the vague, poorly defined category of “fraud,” not about the truth of some clearly stated “proposition A.” When someone says “category A has property X,” for any vague category A—which is basically all categories of things—there will always be edge cases where it's not clear.
I mean, indeed the combination of "fraud is a vague, poorly defined category" together with a strong condemnation of said "fraud", without much explicit guidance on what kind of thing you are talking about, is what I am objecting to in your post (among some other things, but again, it seems better to leave those to my more thorough response).
I think you are vastly overestimating how transparent the boundaries are of the fraud concept you are trying to point to. Like, I don't know whether you meant to include half of the examples I listed in this thread, and I don't think other readers of your post know either. Nevertheless you called for strong condemnation of that ill-defined category.
I think the average reader of your post will leave with a feeling that they are supposed to be backing up some kind of clear line, because that's the language that your post is written in. But there is no clear line, and your post does not actually meaningfully commit us to anything, or should serve as any kind of clear sign to the external world about where our ethical lines are.
Of course we oppose fraud of the type that Sam committed, that fraud exploded violently and was also incredibly reckless and was likely even net-negative by Sam's own goals, but that's obvious and not an interesting statement and is not actually what your post is primarily saying (indeed, it is saying that we should condemn fraud independently of the details of the FTX case, whatever that means).
I think what we owe the world is both reflection about where our actual lines are (and how the ones that we did indeed have might have contributed to this situation), as well as honest and precise statements about what kinds of things we might actually consider doing in the future. I don't think your post is helping with either, but instead feels to me like an inwards-directed applause light for "fraud bad", in a way that does not give people who have genuine concerns about where our moral lines are (which includes me) much comfort or reassurance.
evhub @ 2022-11-29T01:26 (+4)
I mean, indeed the combination of "fraud is a vague, poorly defined category" together with a strong condemnation of said "fraud", without much explicit guidance on what kind of thing you are talking about, is what I am objecting to in your post.
I guess I don't really think this is a problem. We're perfectly comfortable with statements like “murder is wrong” while also understanding that killing Hitler would be okay. I don't mean to say that talking about the edge cases isn't ever helpful—in fact, I think it can be quite useful to try to be clear about what's happening on the edges in certain cases, since it can sometimes be quite relevant. But I don't see that as a reason to object to someone saying “murder is wrong.”
To be clear, if your criticism is “the post doesn't say much beyond the obvious,” I think that's basically correct—it was a short post and wasn't intended to accomplish much more than basic common knowledge building around this sort of fraud being bad even when done with ostensibly altruistic motivations. And I agree that further posts discussing more clearly how to think about various edge cases would be a valuable contribution to the ongoing discussion (though I don't personally plan to write such a post because I think I have more valuable things to do with my time).
However, if your criticism is “your post says edge case B is bad but edge case B is actually good,” I think that's a pretty silly criticism that seems like it just doesn't really understand or engage with the inherent fuzziness of conceptual categories.
evhub @ 2022-11-29T01:30 (+2)
I think what we owe the world is both reflection about where our actual lines are (and how the ones that we did indeed have might have contributed to this situation), as well as honest and precise statements about what kinds of things we might actually consider doing in the future.
I actually state in the post that I agree with this. From my post:
In that spirit, I think it's worth us carefully confronting the moral question here: is fraud in the service of raising money for effective causes wrong?
Perhaps that is not as clear as you would like, but like I said it was a short post. And that sentence is pretty clearly saying that I think it's worthwhile for us to try to carefully confront the moral question of what is okay and what is not—which the post then attempts to start the discussion on by providing some of what I think.
Habryka @ 2022-11-29T01:44 (+4)
I do think your post makes it harder for us, as a community, to actually answer that question, because you yourself answer it with "we unequivocally need to condemn this behavior", in a form that implies strong moral censure of anyone who argues the opposite.
You also said that we should do so independently of the facts of the FTX case, which feels weird to me, because I sure think the details of the case are very relevant to what ethical lines I want to draw in the future.
The section you quote here reads to me as a rhetorical question. You say "carefully", but you just answer the question yourself in the next sentence and say that the answer "clearly" is the way you say it is. I don't think your post invites discussion or discourse about where the lines of fraud are, or when we do think deception is acceptable, or generally reflecting on our moral principles.
evhub @ 2022-11-29T01:57 (+2)
in a form that implies strong moral censure to anyone who argues the opposite
I don't think this, and I didn't say it. If you have any quotes from the post that you think say this, I'd be happy to edit it to be more clear, but from my perspective it feels like you're inventing a straw man to be mad at rather than actually engaging with what I said.
You also said that we should do so independently of the facts of the FTX case, which feels weird to me, because I sure think the details of the case are very relevant to what ethical lines I want to draw in the future.
I think that, for the most part, you should be drawing your ethical boundaries in a way that is logically prior to learning about these sorts of facts. Otherwise it's very hard to cooperate with you, for example.
The section you quote here reads to me as a rhetorical question.
It isn't intended as a rhetorical question. I am being quite sincere there, though rereading it, I see how you could be confused. I just edited that section to the following:
In that spirit, I think it's worth us carefully confronting the moral question here: is fraud in the service of raising money for effective causes wrong? This is a thorny moral question that is worth nuanced discussion, and I don't claim to have all the answers.
Nevertheless, I think fraud in the service of effective altruism is basically unacceptable—and that's as someone who is about as hardcore of a total utilitarian as it is possible to be.
Rohin Shah @ 2022-11-25T07:37 (+8)
Like, let's look ahead a few months. Some lower-level FTX employee is accused of having committed some minor fraud with good ethical justification that actually looks reasonable according to RP leadership, so they make a statement coming out in defense of that person.
Do you not expect this to create strong feelings of betrayal in previous readers of this post, and a strong feeling of having been lied to?
I broadly agree with your comments otherwise, but in fact in this hypothetical I expect most readers of this post would not feel betrayed or lied to. It's really uncommon for people to interpret words literally; I think the standard interpretation of the condemnation part of this post will be something along the lines of "stealing $8b from customers is bad" rather than the literal thing that was written. (Or at least that'll be the standard interpretation for people who haven't read the comments.)
The negative consequence I'd point to is that you lose the ability to convey information in cases where it matters. If Rethink says "X is bad, we should do something about it" I'm more likely to ignore it than if you said it.
Habryka @ 2022-11-25T08:04 (+10)
Yeah, sorry, I think you are right that as phrased this is incorrect. I think my phrasing implies I am talking about the average or median reader, who I don't expect to react in this way.
Across EA, I do expect reactions to be pretty split. I expect many of the most engaged EAs to have taken statements like this pretty literally and to feel quite betrayed (while I also think that, in general, the vast majority of people will have interpreted the statements as being more about mood affiliation and not really intended to convey information).
I do think that, at least for me and many people I know, engagement with EA is pretty conditional on exactly this ability: for people in EA to make ethical statements and actually mean them, in the sense of being interested in following through on the consequences of those statements, and of trying to make their many different ethical statements consistent. Losing that ability would, I think, lose a lot of what makes EA valuable, at least for me and many people I know.
Rohin Shah @ 2022-11-25T09:10 (+12)
Fwiw I'd also say that most of "the most engaged EAs" would not feel betrayed or lied to (for the same reasons), though I would be more uncertain about that. Mostly I'm predicting that there's pretty strong selection bias in the people you're thinking of, and you'd have to pin them down really precisely (e.g. maybe something like "rationalist-adjacent highly engaged EAs who have spent a long time thinking about meta-honesty and glomarization") before it would become true that a majority of them would feel betrayed or lied to.
Habryka @ 2022-11-25T18:55 (+3)
That's plausible, though I do think I would take a bet here if we could somehow operationalize it. I do think I have to adjust for a bunch of selection effects in my thinking, and so am not super confident here, but still a bit above 50%.
RobBensinger @ 2022-11-29T00:26 (+7)
There's a time and place to discuss exceptions to ethics and when goals might justify the means, but this post clearly isn't it.
Other folks in this comment thread mentioned that Ollie's request doesn't require any long philosophical analyses; it just requires leaving out sentences that are hyperbole.
I want to separately bid for a norm on the EA Forum that we err on the side of "encouraging factual discussion at awkward times and in awkward places", as opposed to erring on the side of "people wait around for a maximally clear social signal that it's Okay to voice their thoughts". If a post like this belongs on the EA Forum at all, then I think it should be fine to do our normal EA-Forum thing of nitpicking phrasings, asking follow-up questions, etc.
It's RP giving an update/statement that's legally robust
I don't think that in this case, saying false things improves RP's legal situation. I'd assume the goal is reputational (send the right social signals to EAs and random-journalists-and-social-media-users-paying-attention-to-EA), as opposed to legal.
But yes, there might be legal reasons to leave out the sentence altogether, if the alternative is to try to hammer out a much more concrete and detailed version of the sentence? Also, this is a co-written post, and it can be hard to phrase those in ways that are agreeable to every co-author.
Ozzie Gooen @ 2022-12-01T20:19 (+4)
I basically agree.
I personally mainly disagree with Oliver on the above thread - however, given that there is disagreement, it seems very healthy to me for there to be an open discussion on it.
In this case the issue doesn't seem scary to discuss publicly. If this were about a much more directly controversial and serious issue, say public allegations about individuals, I'd prefer to begin the discussion privately first.
> I don't think that in this case, saying false things improves RP's legal situation. I'd assume the goal is reputational
I personally didn't see this as a legal statement, as much as a public statement meant for the community at large.
Linch @ 2022-11-24T20:06 (+15)
Thanks for this comment.
(Speaking for myself) I saw an earlier version of this post and thought that "strongest possible terms" didn't really make sense but didn't speak up. In retrospect this was a mistake.
I'm not sure when to surface "that doesn't feel exactly right" intuitions so that I speak up at all the correct times, since I have this intuition very often, and if I comment every time I'll come across as really nitpicky/uninteresting. And it's really hard to triage: this comes up so often that I can't do even a rushed, intuition-driven cost-benefit analysis each time.[1]
Another salient example of a mistake in this genre was the FTX commercial. I thought it didn't make sense as an argument, but most commercials don't make sense on a literal level, and I didn't really think too hard about why the FTX commercial was more deceptive than the usual "high-status unrelated activity! Buy our product!" line of messaging.
But again, it's unclear to me on how much to pay attention and when.
[1] EDIT: One heuristic I try to go through is something like "is this mistake central to the argument?" But as established in our other comments, this itself can be a crux. Like, I don't think the exact details of the condemnation of FTX are central to this piece, whereas you do.
Linch @ 2022-11-24T20:17 (+10)
Given that I have been encouraging lots of people to write more about the FTX situation, I want to clarify that I have a dispreference for posts like this.
(Speaking for myself) I'm pretty confused about why you think this post is net negative (which I interpret "dispreference" to mean). I think the additional informational value of the first paragraph is low, while the rest of the post clearly has a lot more content that's salient to collaborators/funders/future employees etc., as well as helping people unaffiliated with RP orient on things less related to the failures of FTX itself (e.g. in terms of risk management).
I'm not disputing that the post might be net negative (consequentialist morality is hard), I'm just surprised that all of your evidence seems to come from one paragraph (and in my opinion, the least interesting one).
Habryka @ 2022-11-24T20:30 (+6)
Sorry, I think the key thing to evaluate is the counterfactual (like, I can say "I have a dispreference for food like this", which means I would prefer other food more, while also not thinking it's worth cooking completely new food, and would definitely prefer it over no food at all).
I think the post is net-positive compared to no post at all from Rethink, and I think the information is helpful.
I think it's easy to improve the post by a lot by being more careful with words. I also separately think (even outside of situations like this, but even more so in this situation) that large co-authored "statements" have pretty broad distortionary effects and I think often say things that nobody really believes (by introducing a new, not clearly defined, "we"), and I have a longer-running policy to encourage people to use more "I" statements, so I do think there is a decently large negative component here.
anonymous6 @ 2022-11-25T07:39 (+5)
I think when most people say “unequivocally” and “all”, they almost always mean “still maybe some exceptions” and “almost all”. If you don’t need to make mathematical/logical statements, which most people don’t, then reserving these words to act as universal quantifiers is not very useful. I used to be annoyed by this but I’ve learned to accept it.
Habryka @ 2022-11-25T08:13 (+2)
I feel confused here. Why would you use the word "unequivocally" to mean "still maybe some exceptions"? That's, like, almost literally the opposite of what it means.
I agree that this is still how many people would use the word, but I think the reason this happens is not that people aren't thinking carefully. It's that using the word "unequivocally" without actually meaning "unequivocally" is a rhetorical trick that gives your words more weight and makes you seem like you have conviction, in a way that tricks many readers into giving you credit for that conviction, even if on reflection they would also think you likely don't mean it.
I also separately think that within EA, especially for ethical statements, where a lot of EAs' claimed expertise lies, it's just really valuable to be precise with the words you use. A lot of what makes EA valuable for me lies in exactly that kind of reasoning (a place to actually think carefully about when it's correct to lie, instead of trying to oversimplify the rules of ethics into something that doesn't end up having much to do with my daily decision making).
Separately, I think the risks of using imprecise language are greatly magnified when condemning something or threatening substantial social censure. Being polemic here creates much worse group dynamics than for most other forms of speech, and there are just too many instances of people (including within EA) taking things like this seriously and then engaging in substantial misguided censure on the basis of it (or, even worse, using the conviction as an excuse to attack their political enemies, but falling back on "well, we didn't mean that literally" when it affects their allies).
anonymous6 @ 2022-11-25T09:24 (+4)
I would be inclined to replace “not thinking carefully” with “not thinking formally”. In real life everything tends to have exceptions and this is most people’s presumption, so they don’t feel a need to reserve language for the truly universal claims which are never meaningful.
Some people have practice in thinking about formal systems, where truly universal statements are meaningful, and where using different language to draw fine distinctions is important (“always” vs “with probability 1” vs “with high probability” vs “likely”).
Trying to push the second group’s norms on the first group might be tough even if perhaps it would be good.
Miguel @ 2022-11-25T11:25 (+1)
(a place to actually think carefully about when it's correct to lie, instead of trying to oversimplify the rules of ethics into a something that doesn't actually end up having to do much with my daily decision making).
You mean you think there are situations in which lying is acceptable? Please explain further.
Habryka @ 2022-11-25T18:50 (+9)
I mean, I've used the classic example of lying to the Gestapo officers at your door about whether you are hiding Jews in your attic a few different times in this thread. Similar situations are not that unlikely to arise when trying to give generalized advice to a community of thousands, distributed across the world.
Miguel @ 2022-11-24T21:31 (+5)
In my experience as an accountant for 10-plus years, no kind of fraud or misappropriation has ever been accepted. However big or small the deception, it will always and unequivocally not be tolerated.
Habryka @ 2022-11-24T22:00 (+21)
Seems kind of crazy to me, for what it's worth. It really seems like, if someone's life was at stake, you should do something other than just "not tolerate it".
Like, we do live in a world where our grandparents (or parents) were sometimes faced with the real decision of whether to lie to Gestapo officers about the Jews in their basement. Please, if you are an accountant and you have to do some slight tweaking of numbers to save the Jews in your basement, please do that. This is not a pure hypothetical; misappropriating funds under immoral and illegal regimes is an actual decision that people alive right now, and people in the future, will face.
Maybe it is the right call for you to swear some undying oath to the accounting principles, but I don't think most people should do that.
Ozzie Gooen @ 2022-11-25T01:04 (+10)
In my experience as an accountant for 10 plus years
My impression is that in the business world, "fraud" is pointing at a cluster of actions that's more specific than most simple definitions. I'd assume that there are incredibly few cases where businesses are actually in situations where they feel they need to commit fraud to save a life or similar, especially in Western countries.
I wouldn't be surprised if there were even legal loopholes for these extreme hypothetical-like situations. Like, the person could say they were under effective duress.
My guess is that basically any of us would agree to hypotheticals or thought experiments that were wild enough. Like, "What if you knew with absolute certainty that America would be completely nuked unless your business commits a tiny amount of fraud? Would you do it then?"
Kirsten @ 2022-11-25T21:09 (+4)
I'm confused. Your comment makes it sounds like someone reading this post might one day have to decide whether to commit fraud in order to save a life or do something else extremely morally important.
I can't think of any examples of that in a democracy in the last 50 years*. Can you? If it's so rare that neither of us can think of an example, I think condemning fraud unequivocally is the way to go, to avoid any confusion.
*I actually can't think of any examples at all, but think this kind of example would be most relevant.
Linch @ 2022-11-25T22:30 (+15)
I'm confused about this comment tbh. I can't tell if we just have very different life experiences or if there is some cognitive fallacy thing going on where it's easier to generate examples for my position than examples against my position.
For example, (in my family's lore) my grandfather was asked to cover up a (as I understand it) minor instance of corruption by his superior. He refused to do so, and was majorly screwed over during Cultural Revolution times as a result. Now this example isn't a clean one since "doing the right thing, even when it's hard" here pointed against fraud, and he in fact did not choose to do so. But I think I would not have faulted him if he chose protecting his family over loyalty to the party for some pretty minor thing. Particularly since his actual choice could easily have counterfactually resulted in my own non-existence.
As another example, at least some forms of American whistleblower animal activism involve skirting the edges of ag-gag laws, which may involve falsifying documents in order to gain access to factory farms and film atrocities. Now maybe their moves here are unethical (I personally would hesitate to lie to an employer to that extent, though it's unclear if this is judicious moral reasoning or just insufficient bravery). But I think this is at least the type of question that's subject to debate, and I would not want to condemn such actions without substantially more detailed thinking and debate.
Note however that the post above only condemns fraud at FTX, not globally.
Kirsten @ 2022-11-26T15:20 (+6)
Those sound closer to what I'd think of as fraud, although I also wouldn't encourage the fraudulent option in either case! But no, I really didn't think of either of those examples when trying to generate examples of fraud that people might think is morally good.
Habryka @ 2022-11-25T21:37 (+14)
Yep, I think we are indeed living in pretty interesting times!
I think both my parents and grandparents in Germany faced similar decisions, and definitely within the last 50 years (half of Germany was under Soviet occupation until around 30 years ago!).
Something being "a democracy" is, I think, only a weak filter on the actual need for this. The U.S. had various anti-gay and anti-communism periods in which I think lying to the government or committing fraud seemed very likely the way to go (e.g. I personally am sympathetic to, though very tentatively still overall opposed to, the people who committed minor fraud to dodge the draft for the Vietnam War, and definitely in favor of military personnel who lied about their sexual orientation for most of the last 50 years).
I also think racism was a serious enough problem within the last 50 years that I would not judge someone if they had lied about their race on various government documents in order to gain access to fair treatment, and of course antisemitism has been rampant enough that lying about your religious affiliation to the government or universities, even within the U.S., was an action I would not judge people too badly for.
It is indeed very easy for me to come up with a very long list of examples where committing minor fraud, with an intention of limiting its negative effects, was the ethical (or at least a defensible) choice. Happy to generate more if you want.
(for some more examples:
- Given that many universities have a policy of expelling students with mental health problems I would not fault someone for lying to their university administration when they are asked about whether they have a history of depression or are using anti-depressants.
- I think various occupational licensing regimes are crazy enough that I would not fault someone for operating without a license for e.g. being a hairdresser, though I feel tentatively confused about this.
- Various parts of immigration law seem crazy enough to me that engaging with the government fully honestly here seems like it could seriously endanger your life or the lives of your children.
- Most recently, the government had crazy enough lockdown restrictions, as well as rules about how vaccines should be distributed, that I think it was ethical to break those rules, which potentially would have involved lying to the government. E.g., given that tons of vaccines were being thrown away, I think I support people who said that they hadn't received a second dose yet in order to get a booster, since the only reason boosters weren't happening was crazy FDA shenanigans.
)
I also expect the world will end up more crazy in the future than it has been in the recent past. I assign substantial probability to additional widespread pandemics, major power wars, nuclear conflict and political polarization causing substantially more oppressive regimes in the west (not like, super high probabilities to any of these, but I think the last 20-30 years, at least in the western world, have been a very calm time in terms of radical global changes, and I do not expect that period of relative calmness to persist for that much longer). I think all of these are also associated with conflict that gives substantially more justification to lying to major institutions and governments and other things that might be classified as fraud.
Kirsten @ 2022-11-25T22:06 (+14)
lying to major institutions and governments and other things that might be classified as fraud.
Okay, that's where I misunderstood you: lying to the government is not what I think of when condemning fraud. I think lying to the government can be very serious and is often done too flippantly, but it's not what I thought you were defending. In my mind, fraud is primarily about stealing money, and I just couldn't figure out how you were defending that.
Habryka @ 2022-11-26T00:15 (+4)
Yeah, in my mind fraud is not tied to financial benefit (and I don't think it is legally, either; any kind of personal benefit will suffice). I also think that if the government is asking you, as a Jew or as a gay person, to pay extra taxes or other types of fines (as has happened in many jurisdictions, including in the last 50 years), you are justified in lying in response, even if it would be for financial gain.
Like, we had sodomy laws in the U.S. until 2003, many of which carried fines (and were enforced within the last 50 years). I feel a bit confused about what you are allowed to do in order to avoid self-incrimination, but it wouldn't surprise me that much if various people were pushed into various minor forms of fraud in order to avoid those fines and criminal penalties.
Separately, if there are substantial swaths of the population trying to discriminate against Jews or gay people economically, I also feel a decent amount of sympathy for people trying to avoid that by lying, even if it is for financial gain (like, if many of the most prominent scholarships to U.S. universities explicitly excluded Jewish or gay or Black people, as I think wasn't too uncommon within the last 50 years, then I feel pretty sympathetic to someone lying about their race/sexual-orientation/religion in order to still claim eligibility for those scholarships, though I do think this case isn't obvious, and many quite nearby cases feel straightforwardly quite immoral).
Linch @ 2022-11-25T22:17 (+4)
I agree with the general tenor of your comment, but a) I think many of the examples you listed would not be considered financial fraud and b) I disagree on the object-level with several of your examples:
I also think racism was a serious enough problem within the last 50 years that I would not judge someone if they had lied about their race in various government documents in order to gain access to fair treatment
I'm not sure which are the specific examples you're thinking of, but I know some people have advocated that Asian American students lie about their race and e.g. pretend to be white to gain college admissions. I think this is probably immoral and I do not recommend this.
I think I support people who said that they hadn't received a second dose yet in order to get a booster, since the only reason why boosters weren't happening was crazy FDA shenanigans.
I think I'm on the other side here, at least for healthy adults, and this was directly decision-relevant in me waiting to get a booster (but I will not fault people who have made a different choice here, particularly ones with medical necessity or who are caretakers for more vulnerable people).
Habryka @ 2022-11-26T00:34 (+5)
a) I think many of the examples you listed would not be considered financial fraud
But nobody specified financial fraud in particular in any of this discussion, at least as far as I could tell. The OP talks about all fraud, and I haven't seen anyone narrow things down to just financial fraud. I agree that most of the things I talked about don't qualify as financial fraud, though I think I could also come up with many examples of it being justified to commit financial fraud within the last 50 years.
I'm not sure which are the specific examples you're thinking of, but I know some people have advocated that Asian American students lie about their race and e.g. pretend to be white to gain college admissions. I think this is probably immoral and I do not recommend this.
Yeah, to be clear, I think I would not support this in the present day, where the forces of discrimination seem substantially weaker. There were however periods where the disadvantage seemed so large that I would have supported active civil disobedience against those rules.
I think I'm on the other side here, at least for healthy adults, and this was directly decision-relevant in me waiting to get a booster (but I will not fault people who have made a different choice here, particularly ones with medical necessity or who are caretakers for more vulnerable people).
Yep, I think this is a reasonable ethical position. I felt really quite on the edge about it, and still do, though I do feel pretty strongly that it's not an obvious case and would feel bad about our community condemning it strongly, at least given my current understanding.
Making this account feels almost as bad as pulling a "Holden", @ 2022-11-25T23:21 (+1)
(I can’t believe I’m piling onto Kirsten with Oliver and Linch, weird).
I think the topic here is greater than financial fraud.
Some of these answers feel a lot like they come from people who haven't been in high-agency positions with enormous responsibility and pressure.
I can think of several situations, involving people you would respect, where leaders have had to take special actions to shield their organization from unreasonable and disproportionate legal harm (often where they were personally uninvolved). These leaders are still good people, and try to do the best they can. Examples:
- Due to difficult and bizarre legal issues, people have to characterize a highly dysfunctional and unreasonable employee's misconduct in a way that anticipates post-termination legal exposure
- Desperate and legally dubious manufacturing of stop-gap medical equipment in an extraordinary pandemic where major governments were grossly irresponsible—in this situation the CEO took on personal responsibility for the entire organization.
None of these dilemmas are limited to EA[1], it’s probably the opposite.
[1] Although we are learning that, to a lot of people, EA seems to actually mean "a place where I can build a career and get resources when things are going well, and then personally evacuate and attribute responsibility to others when there's a problem". This is not at all unnoticed.
Miguel @ 2022-11-26T02:07 (+1)
It is very weird that this comment got downvoted so much, even though this was my experience in the business world. I'm getting the impression that many do not understand fraud, and that is why it keeps repeating like a pattern that affects so many people negatively.
I would appreciate proper criticisms so I can explain further why fraud is unacceptable at all levels, especially in an organizational or business setting.
Miguel @ 2022-11-26T02:12 (+1)
Okay, I found an earlier comment.
So the choice of words was the problem. Hmmm. Sorry for that, I guess. At least we are on the same page as to where we should place our view on fraudulent behavior.
RedStateBlueState @ 2022-11-25T22:22 (+4)
What is a statement you'd want instead of that?
I can't think of a way to phrase a similar statement that wouldn't come off as horrendous when read by the public. Even if this is mostly an EA-facing post (and I'm not sure this is true), the public is inevitably going to find it and if it says anything like "fraud may be warranted in some special circumstances", Rethink Priorities is in big trouble.
If you have a better way of phrasing it that is more accurate and doesn't come off wrong, I'd be glad to hear it. Otherwise, I think this is a good statement. Most EAs will probably read through the word "unequivocally", and I would encourage further discussion on the tolerable bounds of behavior in pursuit of EA ideals, but this statement is not the place for that.
Habryka @ 2022-11-26T00:29 (+7)
I think it's pretty straightforward. I think some of the other announcements got this right. An alternative would be:
As far as I can tell a substantial fraction of the behavior that was going on at FTX was highly unethical and I condemn it in some of the strongest terms that I can think of. It currently looks like billions of dollars of customer deposits were misappropriated, in direct violation of FTX's terms of service and public statements by FTX and Sam Bankman-Fried. This was a gross violation of trust and looks likely to be a straightforward example of extremely large-scale financial fraud, possibly the biggest case of financial fraud since Enron.
This is a tragedy, and I think FTX has caused great harm for the world. I am saddened to hear about the harm to many of FTX's depositors, investors, and grant recipients, and am worried that I myself have contributed to this harm by indirectly lending FTX some of my reputation. I am reflecting deeply on how I should learn from this situation.
Or alternatively just a minimally edited version of the OP's sentence:
We strongly condemn the misuse of customer funds and trust that occurred at FTX.
Miguel @ 2022-11-26T02:17 (+1)
Given that I have been encouraging lots of people to write more about the FTX situation, I want to clarify that I have a dispreference for posts like this. I don't think they are terrible, but the kind of writing that I am interested in is people sharing observations and hypotheses and trying to do collective sense-making, and not public statements like this, which seem to only communicate information that helps people orient incidentally to their more social-reality based core content.
I agree with this criticism. I will do my best to reduce my inherent bias to my real world experiences and be on an inquisitive mode moving forward. Thanks Habryka.
Miguel @ 2022-11-26T02:25 (+1)
But a good question is: how do you tackle hard-earned truths?
Like utopian visions ending up corrupting all of their founders and the people who believed in them (communism in Russia, China, or Cambodia)? Are we always going to leave open ends and keep inquiring, even when we have basically repeated the same errors that have caused massive hurt, death, and suffering as a civilization?
I see a trend now in fraud: we as a society are very vulnerable to it. Is it really too complex for us? Or is it a bug in our cognitive ability, such that only a few can understand the significant effects of small deceptive actions that aggregate like snowballs growing into avalanches of suffering?
I would appreciate your comment on this, Habryka. Thank you.
Holly_Elmore @ 2022-11-24T20:52 (+29)
I’m very proud to be part of an organization that was so prudent with money and managing risk.
Holly K @ 2022-11-23T23:23 (+28)
Thank you for being forthcoming and transparent. I am grateful to see people in positions of leadership doing this.
Janique @ 2022-11-28T11:12 (+26)
We have now also published a post about our impact, our strategy and our funding needs for 2023.
[I work as RP's director of development.]
Janique @ 2022-11-28T11:26 (+28)
Also, we are looking for board members to improve financial and legal oversight of our organization and increase accountability for our leadership: https://careers.rethinkpriorities.org/en/jobs/78036 (apply by Jan 13, 2023)
Artūrs Kaņepājs @ 2022-11-24T07:55 (+25)
Reassuring to read that RP treated the pledged or anticipated crypto donations with caution, and about the crisis management exercises / stress tests. Perhaps other organisations can learn from this. Thanks.
Kate Tran @ 2022-11-24T04:43 (+23)
This is the very beginning of leading forward in a crisis, and personally, I appreciate the transparent communication clarifying RP's current situation and strategies for the short term. I'm especially glad to be reassured that the current team is not affected by layoffs or salary cuts. Though, I'm curious to know whether this means there will be a potential ongoing hiring freeze for certain positions? If so, which roles should we expect to be on pause at RP, or generally, at other longtermism-focused orgs in the Bay Area?
Peter Wildeford @ 2022-11-24T04:50 (+8)
We were planning on opening some longtermism research roles next month that we are now going to pause instead, but I expect we will still open these roles - just later in 2023. I'm not able to speak to what other organizations are doing.
Kate Tran @ 2022-11-24T05:13 (+2)
Thank you, Peter, there seems to be promising hope for longtermism from RP.
It's unfortunate that due to this collapse, longtermism projects have been majorly affected, and I worry that the damage from FTX may cause additional neglectedness or potential skepticism in future funding from interested parties. My speculation is that a large group of orgs will likely pause funding on longtermism and focus more on how to strategically build credibility and trust in order to raise more support from external funders, at least for the upcoming 6 months.
ChanaMessinger @ 2023-01-02T14:59 (+4)
As a part of regular crisis management exercises, we also engaged in an internal simulation in August around the possibility of FTX funds no longer being available
So cool! Is there any writeup of your thoughts here or how you approach these exercises?