Tuesday, May 31, 2011

Evading the Categorical Imperative

Now let's return to the first step of Joyce's argument:
  1. Moral language requires categorical imperatives.
Joyce considers three objections to (1.): morality could be institutional, morality could be founded on hypothetical imperatives, or morality could be relative rather than absolute. These objections are closely related, but Joyce considers them in different places.

"Institutional," in Joyce's usage, means something that one may or may not adopt. The rules of chess are an institution. I must adopt the rules of chess if I want to play in a tournament, but if I am playing against my 7 year-old nephew, I might intentionally make an illegal move (move into check, for instance, so that she can win). As we saw last time, Joyce thinks that practical rationality is the only normative system for which we do not have the luxury of being able to "step outside of the system."

Some philosophers (Joyce mentions Philippa Foot, but says she later abandoned this approach) have proposed that morality is really a system of hypothetical, not categorical, imperatives. Joyce's only response here is to refer back to his discussion of practical reason as the only non-institutional normative system.

Still later, Joyce considers Harman's relativistic view of morality. For Harman, different moral systems are like different frames of reference in physics: events can be viewed from any system, and no one system is privileged over another. If practical rationality is indeed agent-relative, and if morality can be founded on practical rationality, then it makes sense that the resulting system would be relative rather than absolute.

Joyce responds with the Nazi objection. When the Nazis were put on trial, no one thought it necessary to consider the facts from the point of view of the Nazi ethical system. The judges behaved as if their morality were the only correct frame of reference. Thus, Joyce argues, we do not in practice act as if morality were relative.

My first thought here is, "Well, yeah, but in other cases we act as if morality were relative." For instance, knowing that my friend Joe is Jewish, I have no difficulty saying "Joe ought not to eat that cheeseburger," even if I do not feel that I ought not eat one.

More broadly, I would say that our use of moral language is partly cultural and partly instinctive, and that it would be very surprising if it formed a logically consistent system. It is not surprising, then, if we find we have to modify something in our morality in order to make it logically consistent. Isn't this just what moral philosophers have been doing for centuries? Joyce would respond that to let go of the absolute quality of morality results in a system which is no longer recognizably a moral system.

Finally, I want to note that Joyce's Nazi response is merely argument-by-example, and so is pretty weak. My intuition is that most moral systems have some sort of inbuilt relativity. For instance, quite often there are different rules for "us" than there are for "them." Joyce's intuition is different, but he acknowledges that it is just an intuition, and would require much more research to establish with certainty.

To summarize, we can evade (1.) by saying that morality is an institution that we can choose to adopt. Equivalently (?), we can say that morality is a system of hypothetical imperatives of the form, "In order to act in accordance with moral system X, you ought to do Y." What results is a relative, rather than an absolute, morality.

Friday, May 27, 2011

The Trolley Odyssey of Homer

I'm not a big fan of trolley problems, but this is a fun introduction to them, with some fascinating twists and cameos from The Simpsons.

Wednesday, May 25, 2011

Practically Rational

Jumping over point (1.) of Joyce's argument, let's take a look at point (3):
(3.) Practical rationality is the only source of statements that cannot be legitimately questioned.

This can be broken into two pieces:

   3a. Practical rationality cannot be legitimately questioned.
   3b. There is nothing else that cannot be legitimately questioned.

Joyce spends some time arguing for (3a), but the basic point is quite simple. The question, "Why should I care about practical rationality?" simply makes no sense. It amounts to asking for a reason I should care about reasons. This is obviously incoherent.

Oddly - given how central it is to Joyce's argument - he says very little about (3b). He merely points out that the argument in the previous paragraph doesn't work when "practical rationality" is replaced by any other normative system. Maybe this is enough, but it seems to me that such an important point needs more than one sentence of support. (Of course, it may be that I am misrepresenting his argument in making (3b) so central.)

Anyway, it seems we could avoid moral error theory if there were something other than practical rationality that could not be legitimately escaped. I don't see much hope for this escape route, though.

However, I wonder: if we were to take the view of morality that I've been promoting - a social system that imposes constraints on individual behavior - could we argue that, while it is possible to step outside the moral system logically, there is no way to do so practically? That is to say, we are necessarily part of a society, and so are subject to the moral system of those around us, whether we like it or not. (Unless I am alone on a deserted island for the rest of my life - in which case there is, arguably, no need for morality.)

Joyce goes on to analyze practical rationality.

An agent S is practically rational to the extent that she is guided by her subjective reasons.
And,
S has a subjective reason to X if and only if she is justified in believing that S+ (S granted full information and idealized powers of reflection) would advise S to X.

The main take-away from this definition, for my purposes, is that practical rationality is agent-relative.

Thursday, May 19, 2011

The Moral Game

(Note: this post got lost in the Blogger meltdown. Sorry that it is out of order.)

The Prisoners' Dilemma (Mackie's version):

Two soldiers (let's call them Amy and Bob) are on guard at separate posts. They both hear noises indicating that the enemy is coming. They each have to decide: stick to their post, or flee? If they both stick to their posts, they have a good chance of surviving. If they both flee, the enemy will overrun their position, and they might be captured or killed. If one runs and one stays, the one who stays will probably die, but the one who runs has a good chance of getting away while the other guard holds them off.

What should the guards do? Cooperate (i.e., stay) or defect?

We can analyze their options with the help of the following table:

                          Bob's Choice
                     Cooperate    Defect
Amy's    Cooperate     (2,2)      (4,1)
Choice   Defect        (1,4)      (3,3)


The entries in the table are the preference rankings Amy and Bob respectively assign to each outcome: (1,4) indicates that this outcome is the best for Amy (1) and the worst for Bob (4).

If Amy doesn't know what Bob is going to do, she will reason like this: "Suppose Bob decides to defect. Then my choices are to cooperate and probably die (4), or defect (3) and run the risk of the enemy overrunning our position. So I should choose to defect.

"Now suppose Bob decides to cooperate. If I cooperate too, then we have a good chance of surviving (2). But if I defect, I have an even better chance of surviving (1). So I should decide to defect."

Bob reasons the same way, of course, so both decide to defect. Both have chosen rationally, but the outcome is sub-optimal. From a global perspective, both of them cooperating is clearly preferable.
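
To make Amy's reasoning concrete, here is a quick sketch in Python (my own toy illustration, not anything from Mackie or Joyce). It just encodes the preference rankings from the table, where lower numbers are better, and checks Amy's best response to each of Bob's possible choices; the names and code are mine.

```python
# Preference rankings from the table above, keyed by
# (Amy's choice, Bob's choice) -> (Amy's rank, Bob's rank); lower is better.
RANKS = {
    ("cooperate", "cooperate"): (2, 2),
    ("cooperate", "defect"):    (4, 1),
    ("defect",    "cooperate"): (1, 4),
    ("defect",    "defect"):    (3, 3),
}

def amy_best_response(bob_choice):
    """Amy's best (lowest-ranked) option, holding Bob's choice fixed."""
    return min(("cooperate", "defect"),
               key=lambda amy_choice: RANKS[(amy_choice, bob_choice)][0])

for bob_choice in ("cooperate", "defect"):
    print(f"If Bob chooses to {bob_choice}, Amy should {amy_best_response(bob_choice)}.")
# Both lines say "defect": defection dominates, even though mutual
# cooperation (2,2) beats mutual defection (3,3) for both players.
```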

This simple example from game theory helps us understand how moral systems might have evolved. Individuals with a disposition to cooperate can end up with a better chance of surviving than individuals acting purely from their own self-interest. This is the hook that evolution can latch onto to promote cooperation.

A more detailed game theoretical analysis shows that when the situation is repeated many times - rather than the one-off situation described above - cooperation can actually be rationally justified.
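
As a very rough illustration of that point (again my own toy simulation, not the detailed analysis; it also treats the ordinal rankings as if they were costs that can be added up, which is a real simplification), here is a sketch of the repeated game. A reciprocating strategy like tit-for-tat does better against itself than unconditional defectors do against each other.

```python
# Toy iterated Prisoners' Dilemma, reusing the rankings above as per-round
# costs (lower totals are better).
RANKS = {
    ("cooperate", "cooperate"): (2, 2),
    ("cooperate", "defect"):    (4, 1),
    ("defect",    "cooperate"): (1, 4),
    ("defect",    "defect"):    (3, 3),
}

def tit_for_tat(opponent_history):
    # Cooperate on the first round, then copy the opponent's last move.
    return "cooperate" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "defect"

def play(strategy_a, strategy_b, rounds=100):
    """Total cost each player accumulates over the repeated game."""
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        cost_a, cost_b = RANKS[(move_a, move_b)]
        total_a, total_b = total_a + cost_a, total_b + cost_b
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))      # (200, 200): steady cooperation
print(play(always_defect, always_defect))  # (300, 300): steady defection, worse
print(play(tit_for_tat, always_defect))    # (301, 298): exploited only once
```

Nothing this crude settles whether cooperation is rationally required, of course; it just shows the direction the repeated-game analysis points in.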

And this is just the situation we find ourselves in. Every day, we make thousands of decisions whether to cooperate and do what morality dictates - keep that promise, pay for that coffee, obey that traffic signal - or to defect.

And most of us, most of the time, decide to cooperate. But is this rational behavior?

Tuesday, May 17, 2011

Humean's Fake: Joycean Error Theory

I've been reading The Myth of Morality, by Richard Joyce. Joyce is a proponent of moral error theory: He thinks that when we use moral language we are simply in error, because there are no such things as moral facts in the world.

His argument centers on the idea of the categorical imperative. A hypothetical imperative has the form, "If you want to achieve A, you ought to do X." This sort of statement is uncontroversial: there is no doubt that such statements are sometimes true. Kant thought, and Joyce agrees, that categorical imperatives are central to moral thought. Categorical imperatives make the claim, "You ought to do X," without any "if..." clause. Such claims are absolute.

As I understand it, Joyce's argument runs as follows. (He lays out his argument very nicely, but this is my formulation of it, not his.)

  1. Moral language requires categorical imperatives.
  2. Categorical imperatives cannot be legitimately questioned.
  3. Practical rationality is the only source of statements that cannot be legitimately questioned.
  4. But practical rationality cannot provide a basis for (moral) categorical imperatives. 
  5. Therefore, moral language is in error.
(2.) is simply a consequence of the definition of a categorical imperative. Joyce gives each of the other points careful consideration. In the next few posts, I will look at how he deals with attempts to deny each of them.

Tuesday, May 10, 2011

Boo on You! Non-Cognitivism and the Moral Instinct

Non-cognitivism in ethics is the idea that moral claims are really just expressing the speaker's attitude towards something. "Abortion is wrong," for example, amounts to "Boo on abortion!" The attitude expressed is, roughly, "I approve/disapprove of this and you should, too." But the moral claim is itself the expression of the attitude - it is not the claim that one has that attitude. That is, a moral claim is non-propositional: it doesn't have any content that is capable of being true or false.

Cognitivist philosophers respond that we seem to think moral claims have propositional content. They support this by considering the way we use moral language. Consider the claim, "If killing animals is wrong, one shouldn't eat meat." It doesn't make any sense to translate this as, "If boo on killing animals!, one shouldn't eat meat."

I think the cognitivists are probably right: when we make moral claims, we think we are making a statement that is capable of being true or false. (Whether we are right to think so is another matter.) But it seems to me that once they have considered and dismissed non-cognitivism, cognitivist philosophers forget all about the non-cognitive aspect of morality.

In the last post I gave a sketch of morality as an evolved social system that limits the actions of individuals by means of social pressure. If this view is at all correct, it is easy to see why there is a large non-cognitive component to moral claims. A strong expression of disapproval of doing X, and the resultant peer pressure to refrain from doing X, lies at the root of the moral system. (Together with the positive version: expressing strong approval of an action.)

It's harder to understand why there might be a cognitive component to morality. It seems like evolution could have given us a purely emotional response that would serve to enforce the group norms.

Let's see if the linguistic analogy helps us here. Consider "It's wrong to say, 'I am going the store to.'" Here "wrong" is not used in the moral sense, but it plays a similar role. It certainly expresses disapproval. But it also implicitly invokes the rules of grammar: "It's wrong to say, 'I am going the store to,' because in English the preposition comes before its object." Notice that in grammar, the rules arise (originally) as generalizations about actual usage. They are not imposed by some linguistic authority - though various institutions (dictionaries, textbooks) might take on that authority at a later time.

It seems to me that moral claims are also two-pronged: they express approval/disapproval, but they also implicitly invoke general rules that it is assumed are accepted, or at least known, by all. "Murder is wrong" thus has the cognitive content "Murder is a violation of the generally accepted rules of behavior."

Saturday, May 7, 2011

The Moral Instinct

I've been trying to educate myself about ethics, metaethics, and moral philosophy, and so far I feel like I haven't encountered an approach that really makes sense of it all. But I thought I'd try to take a first pass at putting down some of my thoughts on the subject.

Let's start by asking what sort of beast morality is. I come up with something like the following:

A1. A moral system is a social structure that imposes limits on the actions of individuals who are part of a given social group. These limits are enforced by a system of rewards and punishments. Rewards include praise and increased social status; punishments range from shame to shunning to ostracism to death.
Thus, morality acted as a legal system, back before laws and punishments were formalized and written down. But morality is more than just a system of rewards and punishments: it is internalized through feelings of pride, guilt, etc.


If we ask what the origin of such moral systems is, the answer seems pretty clear: evolution. Humans, like other primates, are highly social animals. Our survival depends to some extent on our ability to cooperate with each other. Just as we have evolved an innate capacity for language, we have evolved some innate capacity for moral behavior: not just the external rules of the system, but the internal emotions that result when the rules are obeyed or disobeyed. Presumably, this sort of behavior improved survival rates, so that groups with stronger moral institutions (and containing individuals with stronger moral feelings) out-competed other groups. Let's summarize this as

A2. Morality evolved as a way of subordinating the interests of the individual to the interests of the group.

Clearly, different cultures have implemented widely varying sorts of moral systems. I take it that what we have evolved is a basic instinct for conforming to the group morality. The specific content of that morality differs from culture to culture, and is learned.  Here the language analogy is useful again: we have some innate capacity for language, but the specifics of vocabulary, grammar, etc., are learned.

All this seems rather obvious and straightforward. But it is already enough to answer some of the big questions that moral philosophers ask. In fact, it makes the search for a true account of morality look rather pointless. Why, given A1 and A2, would we expect any one "correct" moral system? That's like asking what's the correct grammar for a language to have, or what's the ideal legal system.

Here the language analogy seems to break down, however. When we hear, "Throw your father down the stairs his hat," we think, "How charming!" but when we hear of a practice like female genital mutilation, we say, "That's just wrong."

But there is a reason our response to other moral systems is different from our response to other languages: a moral system just is a system for deciding what is right and what is wrong. So we should not be surprised that we have an instinctive - maybe even irrational - response to other moral systems.