Explicit Ethics
By Richard Y Chappell🔸 @ 2024-08-20T16:17 (+12)
This is a linkpost to https://www.goodthoughts.blog/p/explicit-ethics
A couple of recent posts by other academics put me in mind of my old take on reactive vs goal-directed ethics. First, Setiya writes, in On Being Reactive:
Philosophers often write as if means-end reason were the factory setting for human agency… It’s not my experience and I doubt it’s yours… [Arational action] pervades our interaction with others. We are often guided by emotion, not beliefs about the best means to our ends. Instrumental reason is not a default possession but a hard-won aspiration.
I think this is at least as true of much moral action as it is of the rest of our lives. The perennial complaint motivating effective altruism is that most people don’t bother to think enough about how to do good. Many give to a charity when asked, without any apparent concern for whether a better alternative was available. (And many others, of course, aren’t willing to donate at all—even as they claim to care about the bad outcomes they could easily avert.)
Being at all strategic or goal-directed in one’s moral efforts seems incredibly rare, which is part of what makes effective altruism so non-trivial (alongside how unusual it is to have any non-trivial degree of genuinely impartial concern—extending even to non-human animals and to distant future generations). Many moralists have lamented others’ lack of altruism. The distinctive lament of EAs is that good intentions are not enough—most people are also missing instrumental rationality.
This brings me to Robin Hanson’s question, Why Don’t Gamers Win at Life?:
We humans inherit many unconscious habits and strategies, from both DNA and culture. We have many (often “sacred”) norms saying to execute these habits “authentically”, without much conscious or strategic reflection. (“Feel the force, Luke.”) Having rules be implicit makes it easier to follow these norms, and typical life social relations are complex and opaque enough to also make this easier.
Good gamers then have two options: defy these norms to consciously calculate life as a game, or follow the usual norm to not play life as a game.
This suggests a novel explanation of why some people hate effective altruism. EA is all about making ethics explicit, insofar as is possible. (I don’t think it’s always possible. Longtermist longshots obviously depend on judgment calls and not just simple calculations. Even GiveWell just uses its cost-effectiveness models as one consideration among many. That’s all good and reasonable. Both still differ strikingly from folks who refuse to consider numbers at all.)
Notoriously, EA appeals disproportionately to nerdy analytic thinkers—i.e., the sorts of people who are good at board games. Others may be generally suspicious of this style of thinking, or specifically hostile to replacing implicit norms with explicit ones. One can hypothesize obvious cynical reasons that could motivate such hostility. What I’m curious to consider now is: do you think there are principled reasons to think that the more “explicit” ethics of effective altruists is actually a bad thing? Or should we take this causal explanation to be, in effect, a debunking explanation of why many people are unreasonably opposed to EA (and to goal-directed ethics more generally)?
Thoughts welcome.
Jamie Elsey @ 2024-08-20T17:56 (+8)
I think part of the concern is that when you try to make ethics explicit you are very likely to miss something, or a lot of things, in the 'rules' you explicitly lay down. Some people will take the rules as gospel, and then there will also be a risk of Goodharting.
In most games there are soft rules beyond the explicit rules: features that are not strictly part of the game and are very hard to define, such as good sportsmanship, but that really are a core part of the game and of why it is appreciated. Many viewers don't enjoy it when a player does something that is technically allowed but is just taking advantage of a loophole in the explicit rules, not playing in the spirit of the game, or missing the point of the game (an example from non-human game players is the AI speedboat that stopped doing the actual race and started driving round in circles to maximise its reward. We like it as an example of reinforcement learning gone wrong, but it's not what we actually want to watch in a race). People who stick only to the exactly explicit laws tend to be missing something, or end up as social pariahs who take advantage of the fact that not all rules are, or can be, written down.
Richard Y Chappell🔸 @ 2024-08-20T18:24 (+4)
Yeah, that seems right as a potential 'failure mode' for explicit ethics taken to extremes. But of course it needs to be weighed against the potential failures of implicit ethics, like providing cover for not actually doing any good.
David_Moss @ 2024-08-20T19:10 (+6)
do you think there are principled reasons to think that the more “explicit” ethics of effective altruists is actually a bad thing? Or should we take this causal explanation to be, in effect, a debunking explanation of why many people are unreasonably opposed to EA (and to goal-directed ethics more generally)?
We discuss this in our preprint.
We find that people evaluate those who deliberate about their donations less positively (e.g. less moral, less desirable as social partners) than those who make their donations based on an empathic response. But a possible explanation of this response is that people take these different approaches to be signals about the character of the other person:
Namely, donating empathically may signal that one has good moral character and is a valuable social partner, because reacting empathically communicates an inclination to help those in need and a reliable motivation to behave prosocially. Supporting this, research has found that people infer that those who rely on emotion are more likely to cooperate and are more likely to feel emotions like empathy (Levine et al., 2018). Additionally, research has shown that donors who experience greater empathy are perceived to have a better moral character, and that this effect is reduced when the emotion felt does not lead to prosocial behavior (Barasch et al., 2014).
In contrast, deliberating about cost-effectiveness may be perceived as a weaker indicator of prosociality, as it suggests that donors are motivated more by pragmatic considerations than by concern for recipients’ feelings. As a result, deliberative donors might withhold assistance in situations where the aid is not deemed cost-effective enough, despite a compelling emotional appeal from the individual in need. This could lead observers to infer that deliberative donors are more cold, calculating, and pragmatic, with weaker commitment to interpersonal relationships. Similarly, research on judgments of individuals who make consequentialist decisions—such as helping a greater number of strangers rather than a single family member—indicates that they are less favored as partners in close relationships (e.g., friend, spouse) and are perceived as less loyal (Everett et al., 2018). Moreover, research has found that helping strangers instead of close others (e.g., friends, family) is deemed morally unacceptable and may have negative relational consequences (Law et al., 2022; McManus et al., 2020).
I think this suggests that individuals may have good reasons for their negative evaluations, as people who deliberate about the cost-effectiveness of their aid may be less likely than someone who aids due to an empathic response to provide aid in the kinds of typical cases which people normally care about (e.g. they may be less likely to help the person themselves, or someone close to them, if they are in need). But, of course, this doesn't show that deliberators are worse, all things considered, so I think this remains quite viable as a debunking explanation.
Richard Y Chappell🔸 @ 2024-08-20T19:43 (+4)
Interesting, thanks for the link! I agree that being a useful social ally and doing what's morally best can come apart, and that people are often (lamentably) more interested in the former.