SBF's comments on ethics are no surprise to virtue ethicists

By c.trout @ 2022-12-01T04:21 (+10)

This is a linkpost to https://www.lesswrong.com/posts/YhYfoGyXFbK9epxLG/sbf-s-comments-on-ethics-are-no-surprise-to-virtue-ethicists

DISCLAIMER: Although this is a criticism of the LW/EA community, I offer it in good faith. I don't mean to "take down" the community in any way. You can read this as a hypothesis for at least one cause of what some have called EA's emotions problem. I also offer suggestions on how to address it. Relatedly, I should clarify that the ideals I express (regarding how much one should feel vs. how much cold reasoning one should be doing in certain situations) are just that: ideals. They are simplified, generalized recommendations for the average person. Case-by-case recommendations are beyond the scope of this post. (Nor am I qualified to give any!) But obviously, for example, those who are neurodivergent (e.g. have Asperger's) shouldn't be demeaned for not conforming to the ideals expressed here. Likewise though, it would be harmful to encourage those who are neurotypical to try to conform to an ideal better suited for someone who is neurodivergent: I do still worry we have "an emotions problem" in this community.

EDIT: Replaced the term "moral schizophrenia" with "internal moral disharmony" since the latter is more accurate and just. Thanks to AllAmericanBreakfast and Matt Goodman for highlighting this.

In case you missed it, amid the fallout from FTX's collapse, its former CEO and major EA donor Sam Bankman-Fried (SBF) admitted that his talk of ethics was "mostly a front," describing it as "this dumb game we woke Westerners play where we say all the right shibboleths and everyone likes us," a game in which the winners decide what gets invested in and what doesn't. He has since claimed that this was exaggerated venting intended for a friend, not the wider public. But still... yikes.

He also maintains that he did not know Alameda Research (the crypto hedge fund heavily tied to FTX and owned by SBF) was over-leveraged, and that he had no intention of doing anything sketchy like investing customers' deposits. In an interview yesterday, he generally admitted to negligence but nothing more. Regarding his ignorance and his intentions, he might be telling the truth. Suppose he is: suppose he never condoned doing sketchy things as a means he could justify by some expected greater good. Where then is the borderline moral nihilism coming from? Note that it's saying "all the right shibboleths" that he spoke of as mere means to an end, not the doing of sketchy things.

In what follows I will speculate about what might have been going on in SBF's head, in order to make a much higher-confidence comment about the LW/EA community in general. Please don't read too much into the armchair psychological diagnosis from a complete amateur – that isn't the point. The point, to lay my cards on the table, is this: virtue ethicists would not be surprised if EAs suffer (in varying degrees) from an internal disharmony between their reasons and their motives at higher rates than the general population. This is a form of cognitive dissonance that can manifest itself in a number of ways, including (I submit) flirtations with Machiavellian attitudes towards ethics. And this is not good. To explain this, I first need to lay some groundwork about normative ethics.

Virtue Ethics vs. Deontology vs. Consequentialism

Yet Another Absurdly Brief Introduction to Normative Ethics (YAABINE)

The LW/EA forums are littered with introductions, of varying quality and detail, to the three major families of normative ethical theories. Here is one from only two weeks ago. As such, my rendition of YAABINE will be even briefer than usual, focusing only on theories of right action. (I encourage checking out the real deal though: here are SEP's entries on virtue ethics, deontology and consequentialism.)

Virtue Ethics (VE): the rightness and wrongness of actions are judged by the character traits at the source of the action. If an action "flows" from a virtue, it is right; from a vice, wrong. The psychological setup (e.g. motivation) of the agent is thus critical for assessing right and wrong. Notably, the correct psychological setup often involves not reasoning excessively: VE is not necessarily inclined towards rationalism. (Much more on this below).

Deontological Ethics (DE): the rightness and wrongness of actions are judged by their accordance with the duties/rules/imperatives that apply to the agent. The best-known form of DE is Kantian Ethics (KE). Something I have yet to see mentioned on LW is that, for Kant, it's not enough to act merely in accordance with moral imperatives: one's actions must also result from sound moral reasoning about the imperatives that apply. KE, unlike VE, is very much a rationalist ethics.

Consequentialism: the rightness and wrongness of actions are judged solely by their consequences – their net effect on the amount of value in the world. What that value is, where it is, whether we're talking about expected or actual effects, direct or indirect – these are all choice points for theorists. As very well put by Alicorn:

"Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act consequentialism".

Finally, a quick word on theories of intrinsic value (goodness) and how they relate to theories of right action (rightness): conceptually speaking, much recombination is possible. For example, you could explain both goodness and rightness in terms of the virtues, forming a sort of Fundamentalist VE. Or you could explain goodness in terms of human flourishing (eudaimonia), which you in turn use to explain a virtue ethical theory of rightness – by arguing that human excellence (virtue) is partially constitutive of human flourishing. That would form a Eudaimonic VE (a.k.a. Neo-Aristotelian VE). Note that under this theory, a world with maximal human flourishing is judged to be maximally good, but the rightness and wrongness of our actions are not judged based on whether they maximize human flourishing!

Those are standard combinations but, prima facie, there is nothing conceptually incoherent about unorthodox recombinations like a Hedonistic VE (goodness = pleasure, and having virtues is necessary for/constitutive of pleasure), or Eudaimonic Consequentialism (goodness = eudaimonia, and rightness = actions that maximize general eudaimonia). The number of possible positions further balloons as you distinguish more phenomena and try to relate them together. There are, for example, many different readings of "good" and different categories of judgement (e.g. judging whole lives vs states of affairs at given moments in time; judging public/corporate policies vs character traits of individuals; judging any event vs specifically human actions). The normative universe is vast, and things can get complicated fast.
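If a toy sketch helps make the combinatorics vivid, here is one – this is purely my illustration, not anything from the VE/DE/Consequentialism literature, and the labels are informal stand-ins rather than canonical theory names:

```python
# A toy sketch of the recombination point above (my illustration, not the
# author's or any philosopher's formalism). Labels are informal stand-ins.
from itertools import product

theories_of_goodness = ["virtue", "eudaimonia", "pleasure"]
theories_of_rightness = ["virtue-based", "duty-based", "consequence-based"]

# Every pairing is a prima facie coherent position: ("pleasure",
# "virtue-based") is the Hedonistic VE mentioned above, and
# ("eudaimonia", "consequence-based") is Eudaimonic Consequentialism.
positions = list(product(theories_of_goodness, theories_of_rightness))
print(len(positions))  # 9 already, from just two choice points

# Distinguish more phenomena (what is being judged, and when) and the
# space of positions balloons multiplicatively.
objects_of_judgment = ["whole lives", "states of affairs", "policies",
                       "character traits", "human actions"]
print(len(positions) * len(objects_of_judgment))  # 45
```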

Here I hope to keep things contained to a discussion of right action, but just remember: this only scratches the surface!

LW and EA, echo chambers for Consequentialism

Why bother with YAABINE? 

Something nags at me about previous introductions on the LW/EA forums: VE and DE are nearly always reinterpreted to fit within a consequentialist's worldview. This is unsurprising of course: both LW and EA were founded by consequentialists and have retained their imprint. But that also means these forums are turning into something of an echo chamber on the topic (or so I fear). With this post, I explicitly intend to challenge my consequentialist readers. I'm going to try to do for VE what Alicorn does for DE: demonstrate how virtue ethicists would actually think through a case.

What does that consequentialist co-opting look like? A number of posters have remarked that, on consequentialist grounds, it is generally right to operate as if VE were true (i.e. develop virtuous character traits), or operate as if DE (if not KE) were true (i.e. beware means-ends reasoning, respect more general rules and duties), or a mix of both. In fact, the second suggestion has a long heritage: this is basically just Rule Consequentialism.

On the assumption that Consequentialism is true, I generally agree with these remarks. But let's get something straight: you shouldn't read these as charitable interpretations of DE and VE. There are very real differences and disagreements between the three major families of theory, and it's an open question who is right. FWIW, currently VE has a slim plurality among philosophers, with DE as the runner-up. Among ethicists (applied, normative, meta, feminist), it seems DE consistently has the plurality, with VE as runner-up. Consequentialism is consistently in third place.

One way to think of the disagreement between the three families is in what facts they take to be explanatorily fundamental – the facts that form the basis for their unifying account of a whole range of normative judgments. Regarding judgments of actions, for DE the fundamental facts are about imperatives; for Consequentialism, consequences; for VE, the character of the agent. Every theory will have something to say about each of these terms – but different theories will take different terms to be fundamental. If it helps, you can roughly categorize these facts based on their location in the causal stream:

DE ultimately judges actions based on facts causally upstream from the action (e.g. what promises were made?), along with, perhaps, some acausal facts (e.g. what imperatives are analytically possible/impossible for a mind to will coherently?);

VE ultimately judges actions based on facts immediately upstream (e.g. what psychological facts about the agent explain how they reacted to the situation at hand?);

Consequentialism ultimately judges actions based on downstream facts (e.g. what was the net effect on utility?).

This is an excessively simplistic and imperfect categorization, but hopefully it gets across the deeper disagreement between the families. Yes, it's true, they tend to prescribe the same course of action in many scenarios, but they very much disagree on why and how we should pursue said course. And that matters. Such is the disagreement at the heart of this post.

The problem of thoughts too many

Bernard Williams, 20th century philosopher and long-time critic of utilitarianism, proposed the following thought experiment. Suppose you come across two people drowning. As you approach you notice: one is a stranger; the other, your spouse! You only have time to save one of them: who do you save? Repressing any gut impulse they might have, the well-trained utilitarian will at this point calculate (or recall past calculations of) the net effect on utility for each choice, based on their preferred form of utilitarianism and... they will have already failed to live up to the moment. According to Williams, someone who seeks a theoretical justification for the impulse to save the life of a loved one has had "one thought too many." (Cf. this story about saving two children from an oncoming train: EY is very much giving a calculated justification for an impulse that only an over-thinking consequentialist would question).

Virtue ethicist Michael Stocker[1] develops a similar story, asking us to imagine visiting a sick friend at the hospital. If our motivation for visiting our sick friend is that we think doing so will maximize the general good (or best obeys the rules most conducive to the general good, or best respects our duties), then we are morally ugly in some way. If the roles were reversed, it would likely hurt to find out our friend came to visit us not because they care about us (because they felt a pit in their stomach when they heard we were hospitalized) but because they believe they are morally obligated (they consulted moral theory, reasoned about the facts, and determined this was so). Here, as before, there seems to be too much thinking getting in the way of (or even replacing) the correct motivation for acting as one should.

Note how anti-rationalist this is: part of the point here is that the thinking itself can be ugly. According to VE, in both these stories there should be little to no "slow thinking" going on at all – it is right for your "fast thinking," your heuristics, to take the reins. Many virtue ethicists liken becoming virtuous to training one's moral vision – learning to perceive an action as right, not to reason that it is right. Of course cold calculated reasoning has its place, and many situations call for it. But there are many more in which being calculating is wrong. 

(If your heuristic is a consciously invoked utilitarian/deontological rule that you've passionately pledged yourself to, then the ugliness comes from the fact that your affect is misplaced – you care about the rule, when you should be caring about your friend. Just like cold reasoning, impassioned respect for procedure and duty can be appropriate at times; most times it amounts to obstinate rule-worship.)

Internal Moral Disharmony

In Stocker's terms, a theory brings on "moral schizophrenia" when it produces disharmony between our reasons/justifications for acting and our motivations to act. Since this term is outdated and misleading, let's call this malady of the spirit "internal moral disharmony." As Stocker describes it (p454):

An extreme form of [this disharmony] is characterized, on the one hand, by being moved to do what one believes bad, harmful, ugly, abasing; on the other, by being disgusted, horrified, dismayed by what one wants to do. Perhaps such cases are rare. But a more modest [disharmony] between reason and motive is not, as can be seen in many examples of weakness of the will, indecisiveness, guilt, shame, self-deception, rationalization, and annoyance with oneself.

When our reasons (or love of rules) fully displace the right motivations to act, the disharmony is resolved but we get the aforementioned ugliness (in consequentialist terms: we do our friend/spouse harm by not actually caring about them). We become walking utility calculators (or rule-worshipers). Most of us, I would guess, are not so far gone, but instead struggle with this disharmony. It manifests itself as a sort of cognitive dissonance: we initially have the correct motivations to act, but thoughts too many get in the way, thoughts we would prefer not to have. Stocker's claim is that Consequentialism is prone to producing this disharmony. Consequentialism has us get too accustomed to ethical analysis, to the point of it running counter to our first (and good) impulses, causing us to engage in slow thinking automatically even when we would rather not. Resolving this dissonance is difficult – like trying to stop thinking about pink elephants. The fact that we have this dissonance in our head makes us less than a paragon of virtue, but better than the walking utility calculator/rule-worshiper.

Besides being a flaw in our moral integrity, this dissonance is also harmful to ourselves. (Which seems to lead hedonistic consequentialists to conclude we should be the walking utility calculators/rule-worshipers![2]) Too much thinking about a choice – analyzing the options along more dimensions, weighing more considerations for and against each, increasing the number of options considered – will dampen one's emotional attachment to the option chosen. Most of us have felt this before: too much back and forth on what to order at a restaurant leaves you less satisfied with whatever you eventually choose. Too much discussion about where to go, what to do, leaves everyone less satisfied with whatever is finally chosen. A number of publications in psychology confirm and elaborate on this readily apparent phenomenon (most famously Schwartz's The Paradox of Choice).[3][4][5][6][7][8][9] (Credit for this list of references goes to Eva Illouz, who finds evidence of this phenomenon in the way we choose our romantic partners today, especially among men.)[10]

Regularly applying ethical analysis to every little thing (which consequentialists are prone to do!) can be especially bad and dangerous. When ethical considerations and choices start to leave you cold, you will struggle to find the motivation to do what you judge is right, making you weak-willed (a "less effective consequentialist" if you prefer).[11] Or you might continue to go through the right motions, but it will be mechanical, without joy or passion or fulfillment. This is harm in itself, to oneself. But moreover, it leaves you vulnerable: this coldness is a short distance from the frozen wastes of cynicism and nihilism. When ethics looks like "just an optimization problem" to you, it can quickly start to look like "just a game." Making careful analysis your first instinct means learning to repress your gut sense of what is right and wrong; once you do that, right and wrong might start to feel less important, at which point it becomes harder to hang onto the normative reality they structure. You might continue to nominally recognize what is right and wrong, but feel no strong allegiance to rightness.

Given his penchant for consequentialist reasoning (and given that being colder is associated with being less risk-averse, making one a riskier gambler and more successful investor),[12] it would not surprise me to learn that SBF has slipped into that coldness at times. This profile piece suggests Will MacAskill has felt its touch. J.S. Mill, notable consequentialist, definitely suffered it. There are symptoms of it all over this post from Wei Dai and the ensuing comment thread (see my pushback here). In effect, much of EY's sequence on morality encourages one to suppress affect and become a walking utility calculator or rule-worshiper (whether he intends this or not) – exactly what leads to this coldness. In short, I fear it is widespread in this community.

EDIT: The term"widespread" is vague – I should have been clearer. I do not suspect this coldness afflicts the majority of LW/EA people. Something more in the neighborhood of 20~5%. Since it's not easy to measure this coldness, I have given a more concrete falsifiable prediction here. None of this is to say that, on net, the LW/EA community has a negative impact on people's moral character. On the contrary, on net, I'm sure it's positive. But if there is a problematic trend in the community (and if it had any role to play in the attitudes of certain high profile EAs towards ethics), I would hope the community takes steps to curb that trend.

The danger of overthinking things is of course general, with those who are "brainier" being especially susceptible. Given that this is a rationalist community – a community that encourages braininess – it would be no surprise to find it here at higher rates than the general population. However, I am surprised and disappointed that being brainy hasn't been more visibly flagged as a risk factor! Let it be known: braininess comes with its own hazards (e.g. rationalization). This coldness is another one of them. LW should come with a warning label on it!

A problem for everybody...

If overthinking things is a very general problem, that suggests thoughts too many (roughly, "the tendency to overthink things in ethical situations") is also general and not specific to Consequentialism. And indeed, VE can suffer it. In its simplest articulation, VE tells us to "do as the virtuous agent would do," but telling your sick friend that you came to visit "because this is what the virtuous agent would do" is no better than the consequentialist's response! You should visit your friend because you're worried for them, full stop. Similarly, if someone were truly brave and truly loved their spouse, they would dive in to save them from drowning (instead of the stranger) without a second thought.

Roughly, a theory is said to be self-effacing when the justification it provides for the rightness of an action is also recognized by the theory as being the wrong motivation for taking that action. Arguably, theories can avoid causing internal disharmony at the cost of being self-effacing. When Stocker first exposed self-effacement in Consequentialism and DE, it was viewed as something of a bug. But in some sense, it might actually be a feature: if there is no situation in which your theory recommends you stop consulting theory, then there is something wrong with that theory – it is not accounting for the realities of human psychology and the wrongness of thoughts too many. It's unsurprising that self-effacement should show up in nearly every plausible theory of normative ethics – because theory tends to involve a lot of thinking.

...but especially (casual) consequentialists.

All that said, consequentialists should be especially wary of developing thoughts too many, for a few reasons:

  1. Culture: the culture surrounding Consequentialism is very much one that encourages adopting the mindset of a maximizer, an optimizing number cruncher, someone who applies decision theory to every aspect of one's life. Consequentialism and rationalism share a close history after all. In all things morality-related, I advise rationalists to tone down these attitudes (or at least flag them as hazardous), especially around less sophisticated, more casual audiences.
  2. The theory's core message: even though most consequentialist philosophers advise against using an act-consequentialist decision procedure, Act Consequentialism ("the right action = the action which results in the most good") is still the slogan. Analyzing, calculating, optimizing and maximizing appear front and center in the theory. It seems to encourage the culture mentioned above from the outset. It's only many observations later that sophisticated consequentialists will note that the best way for humans to actually maximize utility is to operate as if VE or DE were true (i.e. by developing character traits or respecting rules that tend to maximize utility). Fewer still notice the ugliness of thoughts too many (and rule-worship). Advocates of Consequentialism should do two things to guard their converts against, if nothing else, the cognitive dissonance of internal moral disharmony:
    1. At a minimum, converts should be made aware of the facts about human psychology (see §2.1 above) at the heart of this dissonance: these facts should be highlighted aggressively. And early, lest the dissonance set in before the reader's sophistication catches up.
    2. Assuming you embrace self-effacement, connect the dots for your readers: highlight how Consequentialism self-effaces – where and how often it recommends that one stop considering Consequentialism's theoretical justifications for one's actions. 

Virtue ethicists, for their part, are known for forestalling self-effacement by just not giving a theory in the first place – by resisting the demand to give a unifying account of a broad range of normative judgments about actions. They tend to prefer taking things case by case, insistently pointing to specific details in specific examples and just saying that's what was key in that situation. They prefer studying the specific actions of virtuous exemplars and vicious foils. The formulation "the right action is the one the virtuous agent would take" is always reluctantly given, as more of a sketch of a theory than something we should think too hard about. This can make them frustrating theorists, but responsible writers (protecting you from developing thoughts too many), and decent moral guides. Excellent moral guides do less lecturing on moral theory, and more leading by example. Virtue ethicists like to sit somewhere in-between: they like lecturing on examples.

Note that to proscribe consulting theory is not to prescribe pretense. Pretending to care (answering your friend "because I was worried!" when in fact your motivation was to maximize the general good) is just as ugly and will exacerbate the self-harm. That said, theories can recognize that, under certain circumstances, "fake it 'til you make it" is the best policy available to an agent. Such might be the case for someone who was not fortunate enough to have good role models in their impressionable youth, and whose friends cannot/will not help them curb a serious vice. Conscious impersonation of the virtuous in an attempt to reshape their character might sadly be this person's best option for turning their life around. But note that, even when this is the case, success is never fully achieved if the pretense doesn't eventually stop being pretense – if the pretender doesn't eventually win themselves over, displacing the motivation for pretense with the right motivation to act (e.g. a direct concern for the sick friend).

Prevention and Cure

Adopting VE vs. sophisticating Consequentialism

Aware of thoughts too many, what should one do?

Well, you could embrace VE. Draw on the practical wisdom encoded in the rich vocabulary of virtue and vice, emphatically ending your reason-giving on specific details that are morally salient to the case at hand (e.g. "Because ignoring Sal would have been callous! Haven't you seen how lonely they've been lately?"). Don't fuss too much with integrating and justifying each instance of normative judgment with an over-arching system of principles: morally speaking, you're not really on the line for it, and it's a hazardous task. If you are really so inclined, go ahead, but be sure to hang up those theoretical justifications when you leave the philosophy room, or the conscious self-improvement room. Sure, ignoring this sort of theoretical integration might[13] make you less morally consistent, but consistency is just one virtue: part of the lesson here is that, in practice, when humans consciously optimize very hard for moral consistency they typically end up making unacceptable trade-offs in other virtues. Rationalists seem especially prone to over-emphasize consistency.

Alternatively, you could further sophisticate your Consequentialism. With some contortion, the lessons here can be folded in. One could read the above as just more reason, by consequentialist lights, to operate as if VE were true: adopt the virtuous agent's decision procedure in order to avoid the harms resulting from thoughts too many and internal moral disharmony. But note how thorough this co-opting strategy must now be: consequentialists won't avoid those harms with mere impersonation of the virtuous agent. Adopting the virtuous agent's decision procedure means completely winning yourself over, not just consciously reading off the script. Again, pretense that remains pretense not only fails to avoid thoughts too many but probably worsens the cognitive dissonance!

If you succeed in truly winning yourself over though, how much of your Consequentialism will remain? If you still engage in it, you will keep your consequentialist reasoning in check. Maybe you'll reserve it for moments of self-reflection, insofar as self-reflection is still needed to regulate and maintain virtue. Or you might engage in it in the philosophy room (wary that spending too much time in there is hazardous, engendering thoughts too many). At some point though, you might find it was your Consequentialism that got co-opted by VE: if you are very successful, it will look more like your poor past self had to use consequentialist reasoning as a pretext for acquiring virtue, something you now regard as having intrinsic value, something worth pursuing for its own sake... Seems more honest to me if you give up the game now and endorse VE. Probably more effective too. 

Anyway, if you go in for the co-opt, don't forget, part of the lesson is to be mindful of the facts of human psychology. Invented virtues like the virtue of always-doing-what-the-best-consequentialist-would-do, besides being ad hoc and convenient for coddling one's Consequentialism, are circular and completely miss the point. Trying to learn such a virtue just reduces to trying to become "the most consequential consequentialist." But the question for consequentialists is precisely that: what character traits does the most consequential consequentialist tend to have? The minimum takeaway of this post: they don't engage in consequentialist reasoning all the time! 

Consequentialists might consider reading the traditional list of virtues as a time-tested catalog of the most valuable character traits (by consequentialist lights) that are attainable for humans. (Though see "objection h" here, for some complications on that picture).

Becoming virtuous

Whether we go with VE or Consequentialism, it seems we need to tap into whatever self-discipline (and self-disciplining tools) we have and begin a virtuous cycle of good habit formation. Just remember that chanting creeds to yourself and faking it 'til you make it aren't your only options! Encourage your friends to call out your vices. (In turn, steer your friends away from vice and try to be a good role model for the impressionable). Engage with good books, movies, plays etc. Virtue ethicists note that art has a great potential for exercising and training moral awareness, for providing us role models to take inspiration from, flawed characters to learn from, villains to revile. It's critical to see what honesty (and dishonesty), compassion (and callousness), courage (and cowardice) etc. look like in detailed, complex situations. Just knowing their dictionary definitions and repeating to yourself that you will (or won't) be those things won't get you very far. To really get familiar with them, you need to encounter many examples of them, within varied normative contexts. Again, the aim is to train a sort of moral perception – the ability to recognize, in the heat of the moment, right from wrong (and to a limited extent, why it is so), and react accordingly. In that sense, VE sees developing one's moral character as very similar (even intertwined with) developing one's aesthetic taste. Many of the virtues are gut reactions after all – the good ones.

 

  1. ^

    M. Stocker, "The Schizophrenia of Modern Ethical Theories," Journal of Philosophy, 73(14) (1976), 453–466.

  2. ^

    If we take Hedonistic Consequentialism (HC) literally, the morally ideal agent is one which outwardly pretends perfectly to care (when interacting with agents that care about being cared about) but inwardly always optimizes as rationally as possible to maximize hedon, either by directly trying to calculate the hedon-maximizing action sequence (assuming the agent's compute is much less constrained) or by invoking the rules that tend to maximize hedon (assuming the compute available to the agent is highly constrained). In other words, according to HC the ideal agent seems to be a sociopathic con artist obsessed with maximizing hedon (or obsessed with obeying the rules that tend to maximize hedon). No doubt advocates of HC have something clever to say in response, but my point stands: taking HC too literally (as SBF may have?) will turn you into a hedon monster.

  3. ^

    G. Klein, Sources of Power: How People Make Decisions (Cambridge, MA: MIT Press, 1999).

  4. ^

    T.D. Wilson and J.W. Schooler, “Thinking Too Much: Introspection Can Reduce the Quality of Preferences and Decisions,” Journal of Personality and Social Psychology 60(2) (1991), 181–92.

  5. ^

    C. Ofir and I. Simonson, “In Search of Negative Customer Feedback: The Effect of Expecting to Evaluate on Satisfaction Evaluations,” Journal of Marketing Research, 38(2) (2001), 170–82.

  6. ^

    R. Dhar, “Consumer Preference for a No-Choice Option,” Journal of Consumer Research, 24(2) (1997), 215–31.

  7. ^

    D. Kuksov and M. Villas-Boas, “When More Alternatives Lead to Less Choice,” Marketing Science, 29(3) (2010), 507–24.

  8. ^

    H. Simon, “Bounded Rationality in Social Science: Today and Tomorrow,” Mind & Society, 1(1) (2000), 25–39.

  9. ^

    B. Schwartz, The Paradox of Choice: Why More Is Less (New York: Harper Collins, 2005).

  10. ^

    E. Illouz, Why Love Hurts: A Sociological Explanation (2012), ch. 3.

  11. ^

    We know from psychology that humans struggle with indecision when they lack emotions to help motivate a choice. See A. R. Damasio, Descartes' Error: Emotion, Reason, and the Human Brain (1994).

  12. ^

    B. Shiv, G. Loewenstein, A. Bechara, H. Damasio and A. R. Damasio, Investment Behavior and the Negative Side of Emotion. Psychological Science, 16(6) (2005), 435–439. http://www.jstor.org/stable/40064245

  13. ^

    This is an empirical question for psychologists: in practice, does the exercise of integrating your actions and judgments into a unifying theoretical account actually correlate with being more morally consistent (e.g. in the way you treat others)? Not sure. Insofar as brainier people are, despite any rationalist convictions they might have, particularly prone to engage in certain forms of irrational behaviour (e.g. rationalization), I'm mildly doubtful.


Matt Goodman @ 2022-12-01T12:01 (+14)

I'm quite skeptical of post-hoc articles with titles like 'X was no surprise' – they're usually full of hindsight bias. Like, if it was no surprise, did you see it coming?

Also, there's almost nothing about SBF here – is this part 1 of a series?

c.trout @ 2022-12-01T14:44 (+1)

You're right that post-hoc articles are usually full of hindsight bias, making them a lot less valuable. That's why I tried not to make the article too much about SBF (no, this is not part 1 of a series). I laid that out from the beginning:

Please don't read too much into the armchair psychological diagnosis from a complete amateur – that isn't the point. 

If you want a prediction I give one right after this:

The point, to lay my cards on the table, is this: virtue ethicists would not be surprised if many EAs suffer (in varying degrees) from "moral schizophrenia"

I reiterate this when I say "I fear it is widespread in this community" where "it" is a certain coldness toward ethical choices (and other choices that would normally be full of affect). 

SBF is topical and I thought this was a good opportunity to highlight this lesson about not engaging in excessive reasoning. But I agree my title isn't great. Suggestions?

AllAmericanBreakfast @ 2022-12-01T17:53 (+3)

The Mayo Clinic says of schizophrenia:

“Schizophrenia is characterized by thoughts or experiences that seem out of touch with reality, disorganized speech or behavior, and decreased participation in daily activities. Difficulty with concentration and memory may also be present.”

I don’t see the analogy between schizophrenia and “a certain coldness toward ethical choices,” and if it were me, I’d avoid using mental health problems as analogies, unless the analogy is exact.

c.trout @ 2022-12-01T18:19 (+1)

The term is certainly outdated and an inaccurate analogy, hence the scare quotes and the caveat I put in the heading of the same name. It's the term that Stocker uses though, and I haven't seen another one (but maybe I missed it). The description "tendency to suffer cognitive dissonance in moral thinking" is much more accurate but not exactly succinct enough to make for a good name. I'm open to suggestions!

AllAmericanBreakfast @ 2022-12-01T20:38 (+3)

The term I'd probably use is hypocrisy. Usually, we say that hypocrisy is when one's behaviors don't match one's moral standards. But it can also take on other meanings. The film The Big Short has a great scene in which one hypocrite, whose behavior doesn't match her stated moral standards, accuses FrontPoint Partners of being hypocrites, because their true motivations (making money by convincing her to rate the mortgage bonds they are shorting appropriately) don't match their stated ethical rationales (combating fraud).

On Wikipedia, I also found definitions from David Runciman and Michael Gerson showing that hypocrisy can go beyond a behavior/ethical standards mismatch:

According to British political philosopher David Runciman, "Other kinds of hypocritical deception include claims to knowledge that one lacks, claims to a consistency that one cannot sustain, claims to a loyalty that one does not possess, claims to an identity that one does not hold".[2] American political journalist Michael Gerson says that political hypocrisy is "the conscious use of a mask to fool the public and gain political benefit".[3]

I think "motivational hypocrisy" might be a more clear term than "moral schizophrenia" for indicating a motives/ethical rationale mismatch.

c.trout @ 2022-12-01T21:33 (+3)

Thanks for the suggestion. I ended up going with "internal moral disharmony" since it's innocuous and accurate enough. I think "hypocrisy" is too strong and too narrow: it's a species of internal moral disharmony (closely related to the "extreme case" in Stocker's terms), one which seems to imply no feelings of remorse or frustration with oneself regarding the disharmony. I wanted to focus on the more "moderate case" in which the disharmony is not too strong, one feels a cognitive dissonance, and one attempts to resolve the disharmony so as not to be a hypocrite.

AllAmericanBreakfast @ 2022-12-01T22:56 (+2)

I think that's fine too.

Linch @ 2022-12-01T22:17 (+2)

I think "hypocrisy" is too strong and too narrow

Fwiw I consider "hypocrisy" to be a much weaker accusation than "schizophrenia"

c.trout @ 2022-12-02T00:13 (+1)

I meant strong relative to "internal moral disharmony." But also, am I to understand people are reading the label of "schizophrenia" as an accusation? It's a disorder that one gets through no choice of one's own: you can't be blamed for having it. Hypocrisy, as I understand it, is something we have control over and therefore are responsible for avoiding or getting rid of in ourselves.

At most, Stocker is blaming Consequentialism and DE for being moral-schizophrenia-inducing. But it's the theory that's at fault, not the person who suffers it!

Linch @ 2022-12-02T00:54 (+4)

Yeah I think this is fair. I probably didn't read you very carefully or fairly. However, it is hard to control the connotations of words, and I have to admit I had a slightly negative visceral reaction to seeing what I believe to be my sincerely held moral views (that I tried pretty hard to live up to, and made large sacrifices for) medicalized and dismissed so casually.

c.trout @ 2022-12-02T01:56 (+3)

Yikes! Thank you for letting me know! Clearly a very poor choice of words: that was not at all my intent!

To be clear, I agree with EAs on many many issues. I just fear they suffer from "overthinking ethical stuff too often" if you will.

Linch @ 2022-12-03T02:16 (+4)

Thanks for responding! (upvoted)

On my end, I'm sorry if my words sounded too strong or emotive.

Separately, I strongly disagree that we suffer from overthinking ethical stuff too much. I don't think SBF's problems with ethics came from careful debate in business ethics and then missing a decimal point in the relevant calculations. I would guess that if he actually consulted senior EA leaders or researchers on the morality of his actions, this would predictably have resulted in less fraud.

Linch @ 2022-12-04T01:27 (+2)

Meta: I couldn't figure out why the first chart renders with so much whitespace.

c.trout @ 2022-12-03T17:20 (+1)

No worries about the strong response – I misjudged how my words would be interpreted. I'm glad we sorted that out.

Regarding overthinking ethical stuff and SBF: 
Unfortunately I fear you've missed my point. First of all, I wasn't really talking about any fraud/negligence that he may have committed. As I said in the 2nd paragraph:

Regarding his ignorance and his intentions, he might be telling the truth. Suppose he is: suppose he never condoned doing sketchy things as a means he could justify by some expected greater good. Where then is the borderline moral nihilism coming from? Note that it's saying "all the right shibboleths" that he spoke of as mere means to an end, not the doing of sketchy things. 

My subject was his attitude/comments towards ethics. Second, my diagnosis was not that:

SBF's problems with ethics came from careful debate in business ethics and then missing a decimal point in the relevant calculations.

My point was that getting too comfortable approaching ethics as a careful calculation is dangerous in the first place – no matter how accurate the calculation is. It's not about missing some decimal points. Please reread this section if you're interested. I updated the end of it with a reference to a clear falsifiable claim.

Matt Goodman @ 2022-12-01T18:02 (+1)

Ok, I didn't pick up that that was where the prediction was in the article. I think of (good) predictions as having a clear, falsifiable hypothesis, whereas this seems to be predicting... that virtue ethicists continue believing whatever they already believed about EAs?

The reason I downvoted this article is the use of the term 'moral schizophrenia'. Even if it's not your term originally, I think using it is:

a) Super unclear as a descriptive term. I understand in mainstream culture it's seen as a kind of Jekyll/Hyde split-personality thing, so maybe it's meant to describe that. But I'm pretty sure that's an inaccurate description of actual schizophrenia.

b) Harmful to those who have schizophrenia when used in this kind of negative fashion. Especially as it seems to be propagating the Jekyll/Hyde false belief about the condition.

Lastly, the 'moral schizophrenia'/coldness described here seems much more like a straw man of EAs than an accurate description of any EAs I've met. The EAs I know IRL are warm and generous towards their families and friends, and don't seem to see being that way as at all incompatible with the EA kind of reasoning. Sure, online and even IRL discussions can seem dry, but it would be hard to have any discussions if we had to express, with our emotions, the magnitude of what was being discussed.

c.trout @ 2022-12-01T19:21 (+1)

Regarding the term "moral schizophrenia":
As I said to AllAmericanBreakfast, I wholeheartedly agree the term is outdated and inaccurate! Hence the scare quotes and the caveat I put in the heading of the same name. But obviously I underestimated how bad the term was, since everyone is telling me to change it. I'm open to suggestions! EDIT: I replaced it with "internal moral disharmony." Kind of a mouthful, but good enough for a blog post.

Regarding predictions:
You're right, that wasn't a very exact prediction (mostly because internal moral disharmony is going to be hard to measure). Here is a falsifiable claim that I stand by and that, if true, is evidence of internal moral disharmony:

I claim that one's level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer's mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one's decision (or the object at the center of their decision) in such scenarios.

More specifically I predict that, above a certain threshold of engagement with the community, increased engagement with the LW/EA community correlates with an increase in the maximizer's mindset, increase in cognitive dissonance, and decrease in positive affective attachment in the aforementioned scenarios.
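To make this concrete, here is a minimal sketch of how that prediction could be operationalized. Everything in it (the variable names, the idea of survey-derived scores, the choice of Spearman correlation) is my hypothetical illustration, not an actual study design:

```python
# Hypothetical sketch of a test for the prediction above. Assumes each
# participant has a community-engagement score and survey-derived scores
# for maximizer mindset, cognitive dissonance, and positive affect in
# moral/normative scenarios. All names and measures are made up here.
import numpy as np
from scipy.stats import spearmanr

def test_prediction(engagement, maximizer, dissonance, affect, threshold):
    """Correlate engagement with each outcome, restricted to participants
    above the engagement threshold. Returns {outcome: (rho, p-value)}."""
    mask = engagement > threshold
    results = {}
    for name, outcome in [("maximizer_mindset", maximizer),
                          ("cognitive_dissonance", dissonance),
                          ("positive_affect", affect)]:
        rho, p = spearmanr(engagement[mask], outcome[mask])
        results[name] = (rho, p)
    return results

# Smoke test on random (null) data: with no real effect, rho should hover
# near 0 for all three outcomes.
rng = np.random.default_rng(0)
fake = {k: rng.uniform(0, 10, 500) for k in
        ("engagement", "maximizer", "dissonance", "affect")}
print(test_prediction(fake["engagement"], fake["maximizer"],
                      fake["dissonance"], fake["affect"], threshold=5.0))
```

The prediction would be supported if, above the threshold, rho is positive for the maximizer-mindset and dissonance measures and negative for the affect measure (each with a small p-value).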

The hypothesis for why I think this correlation exists is mostly at the end of here and here.

But more generally, must a criticism of/concern for the EA community come in the form of a prediction? I'm really just trying to point out a hazard for those who go in for Rationalism/Consequentialism. If everyone has avoided it, that's great! But there seems to be evidence that some have failed to avoid it, and that we might want to take further precautions. SBF was very much one of EA's own: his comments therefore merit some EA introspection. I'm just throwing in my two cents.

Regarding actual EAs:
I would be happy to learn few EAs actually have thoughts too many! But I do know it's a thing, that some have suffered it (personally I've struggled with it at times, and it's literally in Mill's autobiography). More generally, the ills of adopting a maximizer's mindset too often are well documented (see references in footnotes). I thought it was in the community's interest to raise awareness about it. I'm certainly not trying to demonize anyone: if someone in this community does suffer it, my first suspect would be the culture surrounding/theory of Consequentialism, not some particular weakness on the individual's part.

Regarding dry discussion on topics of incredible magnitude:
That's fair. I'm not saying being dry and calculating is always wrong. I'm just saying one should be careful about getting too comfortable with that mindset lest one start slipping into it when one shouldn't. That seems like something rationalists need to be especially mindful of.

AllAmericanBreakfast @ 2022-12-01T13:18 (+4)

I’m confused about how you’re dividing up the three ethical paradigms. I know you said your categories were excessively simplistic. But I’m not sure they even roughly approximate my background knowledge of the three systems, and they don’t seem like places you’d want to draw the boundaries in any case.

For example, my reading of Kant, a major deontological thinker, is that one identifies a maxim by asking about the effect on society if that maxim were universalized. That seems to be looking at an action at time T1, and evaluating the effects at times after T1 should that action be considered morally permissible and therefore repeated. That doesn’t seem to be a process of looking “causally upstream” of the act.

When I’ve seen references to virtue ethics, they usually seem to involve arbitrating the morality of the act via some sort of organic discussion within one’s moral community. I don’t think most virtue ethicists would think that if we could hook somebody up to a brain scrambler that changed their psychological state to something more or less tasteful immediately before the act, that this could somehow make the act more or less moral. I don’t buy that virtue ethicists judge actions based on how you were feeling right before you did it.

And of course, we do have rule utilitarianism, which doesn’t judge individual actions by their downstream consequences, but rules for actions.

Honestly, I’ve never quite understood the idea that consequentialism, deontology, and virtue ethics are carving morality at the joints. That’s a strong assertion to make, and it seems like you have to bend these moral traditions to fit the categorization scheme. I haven’t seen a natural categorization scheme that fits them like a glove and yet neatly distinguishes one from the other.

c.trout @ 2022-12-01T15:44 (+1)

You're absolutely right to criticize that section! It's just not good. I will add more warning labels/caveats to it ASAP. This is always the pitfall of doing YAABINE.

That said, I do think the three families can be divided up based on what they take to be explanatorily fundamental. That's what I was trying to do (even though I probably failed). The slogan goes like this: VE is "all about" what kind of person we should be, DE is "all about" what duties we have, and Consequentialism is "all about" the consequences of our actions. Character, duty, consequences – three key moral terms. (And natural joints? Who knows). Theories from each family will have something to say about all three terms, but each family of theory takes a different term to be explanatorily fundamental.

So you're absolutely right that, in their judgments of particular cases, they can all appeal to facts up and down the causal stream (e.g. there is no reason consequentialists can't refer to promises made earlier when trying to determine the consequences of an action). Maybe another way to put this: the decision procedures proposed by the various theories take all sorts of facts as inputs. You give a number of examples of this. But ultimately, what sorts of facts unify those various judgments under a common explanation according to each family of theory? That's what I was trying to point at. I thought one way to divvy up those explanatorily fundamental facts was by their position along the causal stream, but maybe I was wrong. I'm really not sure!

Unrelated reply:

I don’t buy that virtue ethicists judge actions based on how you were feeling right before you did it.

I completely agree that actual virtue ethicists would not do so, but the theory many of them are implicitly attached to ("do as the virtuous agent would do, for all the reasons the virtuous agent would do it") does seem to judge people based on how they were feeling/what they were thinking right before they acted.

AllAmericanBreakfast @ 2022-12-01T17:50 (+3)

Thanks for clarifying!

The big distinction I think needs to be made is between offering a guide to extant consensus on moral paradigms, and proposing your own view on how moral paradigms ought to be divided up. It might not really be possible to give an appropriate summary of moral paradigms in the space you’ve allotted to yourself, just as I wouldn’t want to try and sum up, say, “indigenous vs Western environmentalist paradigms” in the space of a couple paragraphs.

anonymous6 @ 2022-12-01T09:03 (+3)

Trying to "do as the virtuous agent would do" (or maybe "do things for the sake of being a good person") seems to be a  really common problem for people.

Ruthless consequentialist reasoning totally short-circuits this, which I think is a large part of its appeal. You can be sitting around in this paralyzed fog, agonizing over whether you're "really" good or merely trying to fake being good for subconscious selfish reasons, feeling guilty for not being eudaimonic enough -- and then somebody comes along and says "stop worrying and get up and buy some bednets", and you're free.

I'm not philosophically sophisticated enough to have views on metaethics, but it does seem sometimes that the main value of ethical theories is therapeutic, so different contradictory ethical theories could be best for different people and at different times of life.

Noah Scales @ 2022-12-01T10:32 (+1)

I think there's something to be said for the value of self-interest in your thought experiment about the person saving their partner over a stranger. A broader understanding of self-interest is one that reflects a rational and emotionally aligned decision to serve oneself through serving another. Some people are important in one's life, and instrumental reasoning applies to altruistic consequences of your actions toward them. Through your actions to benefit them, you also benefit yourself.

With respect to love and trust, and especially in romance, where loyalty is important, self-interest is prevalent. A person typically falls "out of love" when that loyalty is betrayed. Work against someone's self-interest enough, and their emotional feelings and attachment to you will fade.

All to say that consequentialism plays a role in serving self-interest as well as the interests of others. With regard to the dissonance it creates, in the case of manifesting virtues toward those who we depend on to manifest those virtues in return, the dissonance eases because those people serve our interests as we serve their interests.

c.trout @ 2022-12-01T15:00 (+2)

So I feel like your comment misses the point I was trying to make there (which means I didn't make it well enough – my apologies!) The point is not that consequentialists can't justify saving their spouse, as if they don't have the theoretical resources to do so. They absolutely can, as you demonstrate. The point is that in the heat of the moment, when actually taking action, you shouldn't be engaging in any consequentialist reasoning in order to decide what to do or motivate yourself to do it.

Or maybe you did understand all this, and you're just describing how consequentialism self-effaces? Because it recommends we adopt a certain amount of self-care/self-interest in our character and then operate on that (among other things)?

AllAmericanBreakfast @ 2022-12-01T21:38 (+3)

Based on this comment, I think I understand your original point better. In most situations, a conscious chain of ethical reasoning held in the mind is not what should be motivating our actions from moment to moment. That would be crazy. I don’t need to consider the ethics of whether to take one more sip of my cup of tea.

But I think the way we resolve this is a common sense and practical form of consequentialism: a directive to apply moral thought in a manner that will have the most good consequences.

One way that might look is outsourcing our charity evaluations to specialists. I don’t have to decide if bednets or direct donations is better: GiveWell does it for me with their wonderful spreadsheets.

And I don’t have to consider every moment whether deontology or consequentialism is better: the EA movement and my identity as an EA does a lot of that work for me. It also licenses me to defer to habit almost 100% of the time, and invites applying modest limits to my obligation to give of my resources – time, money, and by extension thought.

So I think EA is already doing a pretty darn good job of limiting our need to think about ethics all the time. It’s just that when people do EA stuff, that’s what they think about. My personal EA involvement is only a tiny fraction of my waking hours, but if you thought of my EA posting as 100% of who I am, it would certainly look like I’m obsessed.

c.trout @ 2022-12-02T01:50 (+1)

...outsourcing our charity evaluations to specialists. I don’t have to decide if bednets or direct donations is better: GiveWell does it for me with their wonderful spreadsheets.

And I don’t have to consider every moment whether deontology or consequentialism is better: the EA movement and my identity as an EA does a lot of that work for me. It also licenses me to defer to habit almost 100% of the time

These are good things, and you're right to point them out! I certainly don't expect to find that every EA is a walking utility calculator – I expect that to be extremely rare. I also don't expect to find internal moral disharmony in every EA, though I expect it to be much less rare than walking utility calculators.

I just want to add one thing, just to be sure everything is clear. I'm glad you see how "a conscious chain of ethical reasoning held in the mind is not what should be motivating our actions" (i.e. we should not be walking utility calculators). But that was just my starting point. Ultimately I want to claim that, whether you're in a "heat of the moment" situation or not, getting too used to applying a calculating maximizer's mindset in realms typically governed by affect can result in the following:

  1. Worst case extreme scenario: you become a walking utility calculator, and are perfectly at peace with yourself about being one. You could be accused of being cold, calculating, uncaring.
  2. More likely scenario: you start adopting a calculating maximizer's mindset when you shouldn't (e.g. when trying to decide whether to go see a sick friend or not) even though you know you shouldn't, or you didn't mean to adopt that mindset. You could be accused of being inadvertently cold and calculating – someone who, sadly, tends to overthink things.
    1. In such situations, because you've adopted that mindset, you will dampen your positive affective attachment to the decision you make (or the object at the center of that decision), even though you started with strong affect toward that decision/object. E.g. when you first heard your friend was in the hospital, you got a pit in your stomach, but it eventually wore away as you evaluated the pros and cons of going to see them or doing something else with your time (as you began comparing friends maybe, to decide who to spend time with). Whatever you do end up deciding to do, you feel ambivalent about it.
    2. Any cognitive dissonance you might have (e.g. your internal monologue sounds like this: "Why am I thinking so hard about this? I should have just gone with my gut"), and the struggle to resolve that dissonance only worsens 2.a. 
  3. Either way: in general, considerations that once engendered an emotional response now start leaving you cold (or colder). This in turn can result in:
    1. A more general struggle to motivate oneself to do what one believes one should do.
    2. Seeing ethics as "just a game."

Was that clear? Since it's getting clearer for me, I fear it wasn't clear in the post... It seems it needed to go through one more draft!

AllAmericanBreakfast @ 2022-12-02T02:20 (+2)

No worries!

I understand your concern. It seems like your model assumes that most people start with a sort of organic, healthy gut-level caring and sense of fellow-feeling, which moral calculation tends to distort.

My model is the reverse. Most people start out somewhere between cold and unfeeling on the one hand, and aggressively egocentric on the other. Moral reflection builds into them some capacity for paying attention to others and for cultivating empathy, which at first is an intellectual exercise and eventually becomes a deeply ingrained habit that feels natural.

By analogy, you seem to see moral reflection as turning humans into robots. By contrast, I see it as turning animals into humans. Or think of it like acting. If you've ever acted, or read lines for a play in school, you might have found that at first it's hard even to understand what your character is saying or to identify their objectives. After time with the script, actors grasp the goal and develop an intellectual understanding of their character and of the actions they use to convey emotion. The greatest actors are perhaps method actors, who spend so much time with their character that they come to feel and think naturally as their character would. But this takes a lot of time and effort, and it seems to require starting with a more intellectualized relationship with the character.

As I see it, this is pretty much how we develop our adult personalities and figure out how to fit into the social world. Maybe I'm wrong - maybe most people have a nice well-adjusted sense of fellow feeling and empathy from the jump, and I'm the weird one who's had to work on it. If so, I think that my approach has been successful, because I think most people I know see me as an unusually empathic and emotionally aware person.

I can think of examples of people with all four combinations of moral systematization and empathy: high/high, high/low, low/high, and low/low. I'm really not sure how the correlations run.

Overall, this seems like a question for psychology rather than philosophy, and if you're really concerned that consequentialism will turn us into calculators, I'd be most interested to see that argument made with reference to the psych literature rather than the philosophy literature.

c.trout @ 2022-12-02T04:12 (+1)

It seems like your model assumes that most people start with a sort of organic, healthy gut-level caring and sense of fellow-feeling, which moral calculation tends to distort.

Moral calculation (and faking it 'til you make it) can be helpful in becoming more virtuous, but only to a limited extent – you can push it too far. And anyway, it's not the only way to become a better person. I think more helpful is what I mentioned at the end of my post:

Encourage your friends to call out your vices. (In turn, steer your friends away from vice and try to be a good role model for the impressionable). Engage with good books, movies, plays etc. Virtue ethicists note that art has a great potential for exercising and training moral awareness...

If you want to see how the psych literature bears on a related topic (romantic relationships rather than ethics in general), see Eva Illouz's Why Love Hurts: A Sociological Explanation (2012), Chapter 3. Search for the heading "The New Architecture of Romantic Choice or the Disorganization of the Will" (p. 90 in my edition) if you want to skip right to it. You might be able to read the entire section through the Google Books preview. I recommend the book, though, if you're interested.

AllAmericanBreakfast @ 2022-12-02T05:26 (+2)

I am really specifically interested in the claim you promote that moral calculation interferes with empathic development, rather than contributing to it or being neutral, on net. I don't expect there's much lit studying that, but that's kind of my point. Why would we feel so confident that this or that morality has this or that psychological effect? I have a sense of how my morality has affected me, and we can speculate, but can we really claim to be going beyond that?

c.trout @ 2022-12-02T05:53 (+1)

I claim that there is a healthy amount of moral calculation one should do, but that doing too much of it has harmful side-effects. I claim, for these reasons, that Consequentialism (and the culture surrounding it) tends to result in abuse of moral calculation more than VE does. I don't expect such abuse to arise in the majority of people who engage with/follow Consequentialism – just in more of them than among those who engage with/follow VE. I also claim, for reasons given at the end of this section, that abuse will be more prevalent among those who engage with rationalism than among those who don't.

If I'm right about this flaw in the community culture around here, and this flaw in any way contributed to SBF talking the way he did, shouldn't the community consider taking some steps to curb that problematic tendency?

AllAmericanBreakfast @ 2022-12-02T06:10 (+2)

What you have is a hypothesis. You could gather data to test it. But we should not take any significant action on the basis of your hypothesis.

c.trout @ 2022-12-02T07:12 (+1)

Fair enough!

But also: if the EA community will only correct the flaws in itself that it can measure, then... good luck. That seems short-sighted to me.

I may not have the data to back up my hypothesis, but it's also not as if I pulled this out of thin air. And I'm not the first to find this hypothesis plausible.

Noah Scales @ 2022-12-01T21:25 (+1)

Well, I think my mistake was to use the word "consequentialism" as if referring to the ethical theory. All I mean by consequentialism is thinking in terms of consequences, and having feelings about them. So being drawn by feeling to save one's spouse is very different from calculating the altruistic ideal of maximum benefit through the consequences of one's actions.

I'm not sure that concern about conscious effort toward thinking or overthinking is really moral schizophrenia. If you mean it's preferable to engage one's feelings, or to act through instinct, in ways that you identify as demonstrating character traits, well, I get it, but it's ... just about how the conscious mind should or should not work, or how the unconscious (System 1) should or should not work, and those are tricky things to navigate in the first place. I would settle for just doing the right thing, as I see it, whether that reflects an ethical system or not, and whether it happens with sureness and swiftness or not – not because it shows virtue, but because that means I didn't freeze up when I was supposed to be a hero.

People in tense situations freeze up, or talk to themselves, or think thoughts that feel dissociated, and labeling that is less helpful than simply noting whether they were able to react in time to prevent a disaster, i.e., to save someone's life. If they could, kudos. If they couldn't, how do we help them do better next time? Sometimes being heroic or brave or loyal or whatever takes practice and some reassurance rather than judgment. Not all the time, though.