End-Relational Theory of Meta-ethics: A Dialogue
By Peter Wildeford @ 2016-06-28T20:11 (+38)
Originally written on 22 Oct 2013, and reposted here in 2016.
What is this?
This is an essay where I explain my opinions on meta-ethics through a guided discussion with a hypothetical inquisitor, in the style of an FAQ. It seemed like a much easier, more efficient, and more fluid format than writing a dozen tinier essays. The only downside is that, at 4797 words, it’s a bit lengthy…
Is that really an “FAQ”? Does anyone frequently ask these questions?
Well, erm… no. Not really. Let’s call it “Formatted Answers to hypothetical Questions”, then. People will understand.
So, you’ve finally settled a complex philosophical problem that really smart people with actual philosophical credentials have been debating for thousands of years, and now you’re going to tell us the answer to everything there is about morality?
Uh… well, when you put it that way.
It’s true that all you’ve ever done in philosophy was take Philosophy 101, right? Just that one class?
Yes. But I swear I’m widely read!
Introduction
Ok, I’ll ask you an easy question first. What is meta-ethics?
Meta-ethics is the branch of philosophy concerned with what we mean when we make moral statements. When someone says “murder is morally wrong” or “you ought not murder people”… meta-ethics is about what they mean by these statements; what these people are asserting. What does it mean to be “morally wrong”? What does “ought not” mean? That sort of thing.
Why should I care?
That’s not a very nice way to start us off. Perhaps you shouldn’t care. In fact, asking whether you “should” care is a question that relies on meta-ethics – what does “should” mean in “why should I care”? We’ll be looking at questions like this and others throughout this FAQ.
Um, I meant are there any practical applications of meta-ethics, and I think that’s a pretty reasonable question to ask.
Ok, so you’ve got a point. Having correct moral judgments is incredibly important, and meta-ethics is all about how we can have correct moral judgments, if at all. But to be honest, the entire question has become needlessly confusing, so I don’t blame you for skipping it. Here, I’ll try to make it somewhat clear while trying to persuade you of my view.
Cool. I’ll keep reading. So what does it mean to be “morally wrong”? What does “ought” mean?
Jumping right to the good bit first, I see. I like it.
My view is that when someone says something is morally wrong, they are saying that something violates a particular standard or goal they use to evaluate those actions. Likewise, when someone says you “ought not” do something, they are saying that the action in question violates this particular standard or goal.
And what standard or goal do they use?
That’s just it. I think people use all sorts of standards or goals and different goals are implied by the conversation, even if they aren’t explicitly stated. Some use “fairness”, some are concerned about not harming people in their community, others might use utilitarianism. Yet more might care about what personally disgusts them or what is required by tradition. There’s all sorts of possibilities.
Let’s look at a few examples:
-
1A: Young children ought to eat their vitamins.
2A: People under the age of 21 ought not to drink alcohol.
3A: The “Bishop” chess piece ought to be moved diagonally.
4A: You ought to keep your elbows off the table.
5A: Rich people ought to donate to charity.
-
These statements appear to be worded similarly, but the goals implied are very different…
-
1B: In order to be healthy, young children ought to eat their vitamins.
2B: In order to follow the law, people under the age of 21 ought not to drink alcohol.
3B: In order to play a game of Chess, the “Bishop” chess piece ought to be moved diagonally.
4B: In order to be polite, you ought to keep your elbows off the table.
5B: In order to be moral, rich people ought to donate to charity.
-
My argument is that 1A and 1B are the same sentence, and likewise for each pair. Therefore statements like 1A, while we understand them, are technically incomplete, missing their implied rationale. But it can’t be just any rationale. For example, the following statements are not correct…
-
1C: In order to follow the law, young children ought to eat their vitamins.
2C: In order to play a game of Chess, people under the age of 21 ought not to drink alcohol.
3C: In order to be healthy, the “Bishop” chess piece ought to be moved diagonally.
-
…Only certain goals make sense with certain statements. 1C is not the same as what is implied by 1A.
This view of meta-ethics is called “end-relational theory” and is usually attributed to Stephen Finlay and his paper “Oughts and Ends”.
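If it helps to see this structure laid bare, here’s a tiny toy sketch in Python (my own illustration with made-up examples, not Finlay’s formalism): it treats “ought” as a two-place relation between an action and a goal, so each A-statement and its B-statement come out as the same claim, while the mismatched C-pairings come out false.

```python
# A toy model of end-relational "ought" (a hypothetical illustration,
# not Finlay's formalism): an ought-claim relates an action to a goal
# and is incomplete until a goal is supplied.

# Made-up data pairing actions with the goals they serve.
GOALS_SERVED_BY = {
    "young children eat their vitamins": {"being healthy"},
    "under-21s abstain from alcohol": {"following the law"},
    "the bishop moves diagonally": {"playing a game of chess"},
}

def ought(action: str, goal: str) -> bool:
    """'You ought to X' read as 'In order to G, you ought to X'."""
    return goal in GOALS_SERVED_BY.get(action, set())

# 1A and 1B express the same (true) claim...
assert ought("young children eat their vitamins", "being healthy")          # 1B
# ...while swapping in the wrong goal, as in 1C, yields a false one:
assert not ought("young children eat their vitamins", "following the law")  # 1C
```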
The Meaning of “Moral”
How can the same word have multiple, contradictory meanings? Are there any other words like this?
I’ll suggest one. How about “sound”? Luke Muehlhauser writes an essay on conceptual analysis where he uses this example:
If a tree falls in the forest, and no one hears it, does it make a sound?
Albert: “Of course it does. What kind of silly question is that? Every time I’ve listened to a tree fall, it made a sound, so I’ll guess that other trees falling also make sounds. I don’t believe the world changes around when I’m not looking.”
Barry: “Wait a minute. If no one hears it, how can it be a sound?”
Albert and Barry are not arguing about facts, but about definitions[. …T]he first person is speaking as if ‘sound’ means acoustic vibrations in the air; the second person is speaking as if ‘sound’ means an auditory experience in a brain. If you ask “Are there acoustic vibrations?” or “Are there auditory experiences?”, the answer is at once obvious. And so the argument is really about the definition of the word ‘sound’.
Another example might be the word “desire”. From the same article:
The trouble is that philosophers often take this “what we mean by” question so seriously that thousands of pages of debate concern which definition to use rather than which facts are true and what to anticipate.
In one chapter, Schroeder offers 8 objections to a popular conceptual analysis of ‘desire’ called the ‘action-based theory of desire’. Seven of these objections concern our intuitions about the meaning of the word ‘desire’, including one which asks us to imagine the existence of alien life forms that have desires about the weather but have no dispositions to act to affect the weather. If our intuitions tell us that such creatures are metaphysically possible, goes the argument, then our concept of ‘desire’ need not be linked to dispositions to act.
Other examples probably exist too.
So could any goal be a “moral” one?
Not really. I hope my examples make it clear that there’s a lot more to normativity (what we ought to do) than morality – there’s also rules of a game, rules of health, rules of etiquette, and the rule of law that we can use to evaluate action independently of morality. Not to mention rules of epistemology, rules of the office, rules of the pool, etc.
I personally think definitions are arrived at by social consensus. Like driving on either the left or the right side of the road, it doesn’t matter which way we use a word, as long as we all agree to use it the same way. Given the way I see people use “morality”, I think the clearest concept is one that evaluates actions based on whether they take into account more than just your personal self-interest; that is, the direct or indirect benefit of others.
So something like Ayn Rand’s Objectivism wouldn’t count as a “morality” (though you can disagree with me on this and still accept my main points). But many standards would. For example, we could imagine these conversations:
-
Aristotle: “Is it better to be a hero or a coward?”
Me: “That depends. What do you mean by ‘better’?”
Aristotle: “I mean that which best displays the virtues.”
Me: “Assuming I know what list of virtues you’re talking about, I would say it’s better to be a hero.”
-
Kant: “Should I lie?”
Me: “That depends. What goal are you talking about when you say ‘should’”?
Kant: “I’m talking about whether lying would make sense when willed as a universal law.”
Me: “I’d say that you shouldn’t lie then, because lying would not make sense when willed as a universal law.”
-
Bentham: “Is it wrong to eat meat?”
Me: “That depends. What do you mean by ‘wrong’?”
Bentham: “I mean whether it would maximize the predominance of pleasure over pain in the world.”
Me: “I’d say that avoiding meat would maximize the predominance of pleasure over pain.”
-
Locke: “Ought there be safeguards to protect our property?”
Me: “That depends. What goal are you talking about when you say ‘ought’?”
Locke: “I’m talking about whether all people would agree they must do this action when forming a government via social contract.”
Me: “Given your theory that governments exist, at least in part, to protect property, I’d agree that there ought to be such safeguards.”
-
…Though this doesn’t work well with everyone:
Mackie: “Is it morally obligatory to donate 50% of your income?”
Me: “That depends. What does it mean for something to be ‘morally obligatory’?”
Mackie: “Gee, I don’t know. I don’t think the idea of an obligation makes any sense, personally.”
Me: “Then I don’t think there’s an answer to your question, sorry.”
-
Craig: “Is volunteering at a homeless shelter the right thing to do?”
Me: “That depends. What do you mean by ‘the right thing to do’?”
Craig: “Well, I’m talking about what would be endorsed by a just and loving God.”
Me: “I think there’s a problem because such an entity doesn’t exist. Sorry. However, if it’s any consolation, I do think volunteering at a homeless shelter is just and loving.”
-
This kind of analysis, while directly derived from end-relational theory, also goes by the name pluralistic moral reductionism.
But surely some types of morals are better than others, right? After all, you’re the “Everyday Utilitarian”. So why isn’t utilitarianism the best?
I agree with you in part. As you saw, I just suggested that a morality based on God is problematic if God doesn’t exist, as I think is the case. However, I think any morality where we have a consistent list of what is and is not acceptable is a valid use of the term. One could even create a consistent list of commands from a fictional or hypothetical God and use that as a morality, even if God doesn’t actually exist. In fact, I’d suggest that deontology is pretty much just this – a list of things that cannot be done, not based on any particular principles.
There are reasons why I don’t like deontology or virtue ethics, but they’re reasons about my personal desires and criteria for morals, not reasons related to meta-ethics. I think we can judge morals according to additional standards (or meta-morals), but these standards are themselves picked based on our own desires. Moreover, there could be multiple different such meta-morals and the whole thing starts all over again, cascading to infinity, turtles all the way down.
Basically, I do like utilitarianism the best, but it’s not for reasons related to meta-ethics. I’ll save that discussion for "Utilitarianism: A Dialogue".
Can We Have an Ought Simpliciter?
There’s one thing I don’t understand. Imagine that terrorists have rigged my chessboard so that a bomb will level New York City unless I move a pawn five spaces. If I ought not move my pawn more than two spaces in order to satisfy the rules of Chess, but I also ought not level New York City, which one wins? How do I decide between goals?
That’s a bit unrealistic, I think. You sure about this whole terrorist thing?
Bear with me, this is an important point.
Ok, fine. In Chess, it’s true that you ought not move your pawn more than two spaces. But if the terrorists have hooked up a bomb to the chessboard that will level New York City unless you move your pawn five spaces, perhaps you should ignore the rules of chess on this one. But all this means is that the don’t-destroy-New-York obligation is more important to you than the follow-rules-of-chess obligation. We could, though perhaps not easily, imagine some weird monster that cares more about chess than NYC and chooses the other way. Perverse, yes, but only according to our moral standards, not his.
When we say “we ought not level New York City”, we’re really saying “in order to avoid killing people, we ought not level New York City” or perhaps even “we morally ought not level New York City”. If we insist on “we ought not level New York City” with no goal attached, in a free-floating ought simpliciter form, we should (in order that we analyze meta-ethics correctly) envision some sort of ERROR 404 GOAL NOT FOUND whenever we try to wrap our heads around it.
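To extend the earlier toy Python sketch (again, purely a hypothetical illustration of my own): an ought simpliciter supplies no goal, so there is no standard to consult, and the evaluation fails in exactly this ERROR 404 GOAL NOT FOUND way.

```python
# Toy model continued: evaluating an ought-claim requires a goal;
# with no goal supplied there is simply no standard to consult.

STANDARDS = {
    "avoiding killing people": lambda action: action != "level New York City",
    "following the rules of chess": lambda action: action != "move a pawn five spaces",
}

def evaluate(action, goal=None):
    if goal is None:
        # An ought simpliciter: no goal, no verdict.
        raise LookupError("ERROR 404 GOAL NOT FOUND")
    return STANDARDS[goal](action)

print(evaluate("level New York City", "avoiding killing people"))  # False: violates the goal
try:
    evaluate("level New York City")  # a goal-free ought simpliciter
except LookupError as error:
    print(error)  # ERROR 404 GOAL NOT FOUND
```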
Can’t we just say that morality is what we’re talking about when we say we ought to do something with no goal?
First, this would be needlessly confusing. However, there’s a more important reason – to declare some goal to be what we mean by an unspecified ought simpliciter is to give that goal the special status of being more important than other goals from a meta-ethical point of view, which is a privilege that cannot be justified from this meta-ethical view. There just isn’t a meta-ethical basis for declaring one goal better than another; goals are only more or less important in the context of desires and/or other goals.
Though, if you’re not playing philosopher and instead playing moral advocate, then by all means exploit the persuasive value of an “ought simpliciter” and don’t bother to explicitly state the goal in question.
What about an “ought all things considered”?
Another notion similar to the ought simpliciter is the “ought all things considered”. What we “ought, all things considered, to do” is that which we ought to do when reflecting upon all possible goals. For example, an estranged wife who could get a huge life insurance payout if she kills her husband might understand that she morally ought not poison his coffee even though she pragmatically ought to. Feeling the tug of both goals, what ought she, all things considered, to do?
We’re tempted to just say she ought to put morality first and refrain from the poison. But this is just because morality is more important to us, not because of some meta-ethical rule. The idea of an “ought all things considered” is to do whatever best satisfies all goals at once. Why should the moral goal automatically win?
In fact, I think it’s impossible to actually consider all goals, because as I mentioned before there are millions of potential goals, many of which contradict one another. This won’t work out. How can we simultaneously satisfy a goal to poison someone’s coffee and a goal to ensure that coffee remains free from poison?
The Linguistics of Goals
Let’s back up a moment. If moral statements don’t make sense without explicit goals, why don’t we just state the goals outright?
For two reasons:
First, the speaker might just be making use of a rhetorical device. The speaker presumably wants his or her command to be followed and will phrase it in the most persuasive way. It just so happens that leaving out the explicit goal and giving a straightforward command is more persuasive. Likewise, by leaving out the goal in question, the speaker can potentially appeal to multiple goals at once, depending on which inferences the listener draws.
Second, the speaker may genuinely not realize that they’re communicating based on an implicit goal. There is a kind of culture of categoricalness that gives prominence to certain standards without also giving people the ability to understand on what authority these standards are based. Etiquette is perhaps the clearest example — why exactly do we care so much about people putting their elbows on the table? It’s just, apparently, something we do.
Alright, now here’s a stumper. How would you make sense of the statement “You ought to be moral”?
Did I say two reasons? I guess I meant three. When we look at the statement “you ought to be moral”, we might notice the goal implied is circular – “In order that you be moral, you ought to be moral.” Why be moral? Because morality demands it, of course!
Sure, this might make no sense. But that’s because you’re expecting an expression of a fact. I’ll let you in on a secret – not all language is meant to exchange facts. Consider the sentence “I now declare you husband and wife!”, as uttered by a clergywoman at the wedding she is officiating. This sentence has a semantic meaning, describing the action taking place. But it also has a separate, declarative function — the very uttering of that sentence creates a legally significant change that makes the couple in question married.
In his work on speech acts (beginning with his 1969 book “Speech Acts: An Essay in the Philosophy of Language”), John Searle distinguished five different roles a sentence can play:
- Assertive: The statement commits the speaker to the truth of something. (“The man is tall.”; “I believe Jesus is the Son of God.”)
- Directive: The statement seeks to cause the listener to act in a certain way. (“Please hand me the newspaper.”)
- Commissive: The statement commits the speaker to a future course of action. (“I promise to pay you back.”)
- Expressive: The statement seeks to express the speaker’s feelings. (“Congratulations! You did so well!”)
- Declarative: The statement directly causes the world to change. (“You are now husband and wife!”)
Daniel Boisvert’s theory of Expressive Assertivism suggests that many (but not all) moral statements work like this – they have both an assertive component, asserting a moral fact, and an expressive component, showing that the speaker really cares about a particular moral standard. Likewise, I think it’s also quite plausible that some moral statements are directives that differ little from other commands like “shut the front door”.
It’s important to take a moment here and appreciate how moral statements can be purely assertions of fact, purely expressive (or directive, or declarative), or some combination in between. This is precisely how you can get some people thinking there are moral facts and other people thinking that morality is just about expressing our feelings – the reality, I suggest, is a hybrid of both.
Moral Motivations and the Ontology of Morality
But really… Why ought I be moral?
It depends. What goal do you have in mind when you say “ought”?
Ok, wise guy. I see your game. I suppose I meant “why should I feel motivated to be moral if I don’t want to”?
There isn’t any particular reason I could give you. Generally, you’ll feel motivated to be moral if you desire it and identify with it. Most people desire to be a good person according to some kind of definition because of how they were raised as children and brought up in society. Most people like to make other people happy, even if it’s just their friends and family. And those that are so sociopathic as to despise everyone don’t end up having good lives themselves.
But if you’re looking for a rock solid argument that you should give 50% of your income to the most effective charities or something like that, there’s not much I can say. It might make you happier, I guess?
So morals are just desires then?
Well, no. Desires are what motivate us to want to be moral, but they are different from morals themselves. For example, it’s easy to talk about a moral system that no one wants to follow. Moral systems are just logical descriptions that are true by definition, kind of like mathematics. It’s true by definition that utilitarians ought to maximize happiness, because that’s what it means to be a utilitarian. Combine that definition with facts about factory farming, and all of a sudden it’s true that utilitarians ought to be vegans as well.
We can imagine laws of logic that govern systems that don’t even exist. Morals are kind of like that, descriptions of how people would hypothetically act if they had certain goals and were perfectly rational at following them. We can still evaluate actions based on these morals regardless of what people desire. Even if you don’t care about utilitarianism, you’re still a bad utilitarian if you eat meat.
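One more toy sketch in the same hypothetical spirit (mine, not anything from the literature): if a standard is just a function from actions to verdicts, the agent’s desires never enter the evaluation at all, which is the sense in which you’re a bad utilitarian if you eat meat, whether or not you care about utilitarianism.

```python
# Toy model: a standard is a function from actions to verdicts.
# Note that the agent's desires are not an input anywhere.
from typing import Callable

Standard = Callable[[str], bool]

# Made-up stand-ins for real standards:
utilitarianism: Standard = lambda action: action != "eat meat"
law_following: Standard = lambda action: action != "exceed the speed limit"

def evaluate(standard: Standard, action: str) -> bool:
    return standard(action)  # no 'desires' parameter to consult

# An agent who cares nothing for utilitarianism still gets evaluated:
print(evaluate(utilitarianism, "eat meat"))  # False: a bad utilitarian, by definition
```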
Ok, maybe they’re not desires, but aren’t they just opinions? How could you make some sort of objective ethics out of this?
Again, no. It’s a fact that “if you want to be a utilitarian, you ought not to eat meat”. Likewise, it’s a fact that “if you want to follow the law, you ought not exceed the speed limit”. These aren’t opinions.
This still sounds awfully contingent and escapable. But I thought morals were categorical and inescapable for all agents. What gives?
Even if people don’t care to follow morals, that doesn’t mean we have to let them get away with it. Instead, this is usually where we express our moral statements all the more forcefully. We can apply our moral standards onto others in a non-contingent and inescapable way, regardless of how they think.
Ok, I get what you’re saying. But when I say that “someone ought not murder”, I feel a really firm conviction and I don’t think it’s just a matter of a standard that I use arbitrarily. Morals are morals. It’s not the case that any standard will do.
Right. But that firm conviction is the “expressive assertivism” we talked about earlier, not a magic force of morality. Regardless of how you feel, no moral force will zap people into compliance. Instead, other people might think differently about morality, and there may not be much you can say to change their minds. Philippa Foot discusses how this might be okay in her paper “Morality as a System of Hypothetical Imperatives”.
So am I obligated to do anything?
Yes. You have legal obligations to follow the laws, epistemic obligations to believe the truth, deontological obligations not to lie under any circumstance, utilitarian obligations to donate as much of your income as you can manage, etc. You’re under millions of potential obligations – one for each possible standard that can evaluate actions. Some of these may be nonsensical, like an anti-utilitarian obligation to maximize suffering or an obligation to cook spaghetti for every meal. But all of these obligations are there, even when they contradict one another. Chances are you just don’t care about most of them.
But I don’t have to do these things if I don’t want to, right?
Sort of. You’re right that there’s no law of physics that forces your compliance against your will. However, you might care about some of these standards and feel motivated to follow them. For others, there might be indirect consequences that you care about. For example, you may not care at all about the law itself, but you might care about avoiding jail, and therefore feel a legal obligation.
For other standards, you might not care about them or about the consequences of not following them. For those, you don’t need my permission to ignore them.
How can I have an obligation if I’m not motivated to follow it?
The answer lies in what you mean by “obligation”. I’m defining an “obligation” as a requirement imposed on you by a standard. This position is normally called “moral externalism”, if you know what that phrase means. If you don’t, that’s ok.
If your definition of “obligation” implies you’re motivated to follow it, that just means we’re using the word differently, and that’s ok, because a lot of philosophy is (sadly) like that. If you decide that obligations must, by definition, motivate, then just realize that while you have no obligations-under-that-definition, you still have obligations-under-the-definition-I-was-using. Either way, it makes sense to talk about these requirements imposed on you by standards, regardless of what you choose to call them.
Now I’m confused. Is this view “moral realist” or “moral anti-realist”?
Those of you who don’t know what this means can skip this question – you’re not missing much. For others, I know you’ll hate me for saying this, but it depends on what you mean by “moral realist” and “anti-realist”. I agree that when people make moral claims, they might be saying things that are true, which is a very realist claim. However, I disagree that there is One True Moral Standard. I agree that you’re under moral obligations, but I disagree that these obligations have some sort of compelling force independent of desire. Depending on who you talk to, this makes me “realist” or “anti-realist”.
Conclusions
How does this view make sense of moral debate?
Have you noticed that when most people argue about a moral issue, they’re mostly talking past each other? Well, this view offers a perfect explanation for how this happens – people hold two different moral goals, never mention these goals out loud, and therefore never realize that their disagreement is not about a fact, but rather about which goal to apply. It’s easy for someone to say “I think it’s wrong for a woman to have an abortion (in order to obey the commands of God)” and another person to say “I think it’s right for a woman to have an abortion (in order to preserve the autonomy of the woman)” and not notice that one person is talking about God and the other person is talking about autonomy.
Of course, this doesn’t mean that all moral debate has to stop. Instead, we could talk about whether or not God exists and can give commands worth caring about or whether it makes sense to talk about “autonomy”. But it’s also possible that people might just fundamentally value different things and have a legitimate moral impasse.
Okay, so there are some appealing things about your view. But is it just your crazy view, or do other philosophers support it?
This view is very similar (if not identical) to the one expressed by Stephen Finlay in his paper “Oughts and Ends”. It’s also not too different from what Peter Singer argues in “The Triviality of the Debate Over Is-Ought and the Definition of ‘Moral’”. Also, this view is pretty similar to Richard Carrier’s Goal Theory.
And while they’re not Ph.D. philosophers, this view is virtually identical to Luke Muehlhauser’s Pluralistic Moral Reductionism and not that different from Alonzo Fyfe’s Desirism.
I’m running out of questions. Can you quickly summarize all the reasons why I might prefer your view of meta-ethics over a different one?
My view is that when we say we ought to do something, we’re really implying some sort of goal, standard, or reason for doing that action, even if we don’t say it. Of these potential goals, there might be many – some moral, some not. You should adopt this view because it (a) makes the most sense of all senses of ought, including non-moral ones, (b) explains both the expressive / assertive content of moral statements and the descriptive content of statements better than alternative theories, (c) best makes sense of (the failures of) moral debate, and (d) doesn’t unjustifiably make any particular goal superior to all others.
In short, you ought to adopt my view (in order that you hold true beliefs about meta-ethics).
[deleted] @ 2016-07-07T04:26 (+1)
From a quick read, your view seems to be similar to that of Gilbert Harman. See his paper “Moral Relativism Defended” and his part of the book Moral Relativism and Moral Objectivity.