
“They’ll Mess It Up!”: Not an Objection to a Moral Theory

I posted some initial thoughts about lying to voters in a recent post. In my book The Ethics of Voting, I argue that most voting is wrongful. In the recent post, I said that voters are kind of like the murderer at the door, and the principles that explain when you can lie to the murderer at the door explain when you can lie to voters. Of course, just when you can lie to murderers at the door is a complicated question, and the same goes for bad voters. You don’t owe murderers at the door the truth, but for strategic reasons, you have to be careful in what you say to them. The same goes for voters.

Many of the commenters said that my position can’t be right because people will misapply it in dangerous ways. They are right that politicians will misapply it in dangerous ways. In fact, I bet some politicians who wrongfully lie do so because they mistakenly think they fall under a murderer-at-the-door-type case. But that doesn’t mean that the principle is wrong. It just means that people tend to mess up the application.

So, I say, “In special circumstances, it’s permissible to lie, if doing so is an effective means to protect the innocent from wrongfully-imposed harms.” Now suppose someone objects:

We are poor judges of consequences. We are prone to thinking we are in exceptional circumstances when we are not. We are prone to misapplying principles in self-serving ways. We look for excuses when there are none. If Brennan’s position on lying to voters were widely believed, politicians would probably misapply the principles in dangerous ways. In most real-life scenarios, if a politician believes himself permitted to lie to voters, he should recognize he is prone to error, and should be extremely skeptical of his conclusion that lying is permitted in this instance.

This objection says that my argument is self-effacing. If people believed it, they would misapply it. While trying to conform to my position on lying to voters, they would act in ways not actually authorized by this position.

This objection fails for the same reason self-effacingness objections usually fail. The fact that most people would botch applying a theory does not show that the theory is wrong. So, for instance, suppose—as is often argued—that most people would misapply utilitarian moral standards. Perhaps applying utilitarianism is too hard for the common person. Even if so, this does not invalidate utilitarianism. As David Brink notes, utilitarian moral theory means to provide a criterion of right, not a method for making decisions.[i] Utilitarianism is supposed to explain what makes actions right and wrong. Whether it is useful—given flawed human psychology—for determining on the ground what to do is a different matter. Even if everyone consistently misapplied utilitarianism, this would not show the theory is false.[ii] (As an analogy, consider that certain physics equations explain why a baseball lands where it lands. However, most expert outfielders would never catch a ball if they tried to do so by applying the equations. The equations explain the ball’s path and where the ball will land, but do not provide a “decision procedure” for catching balls.)
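The baseball analogy can be made concrete. A minimal sketch of the standard drag-free projectile-range equation (an idealization I am supplying for illustration, not anything from the post):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(speed, angle_deg):
    """Horizontal distance travelled by a drag-free projectile
    launched from ground level at the given speed (m/s) and angle."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / G
```

The equation is a correct account of where the ball will land, yet no outfielder catches fly balls by solving it mid-sprint, which is the criterion-versus-decision-procedure distinction in miniature.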

 

[i] David Brink, “Utilitarian Morality and the Personal Point of View,” Journal of Philosophy 83 (1986): 417-38.

[ii] For an extended argument that moral theory aims to explain rather than to provide a decision-procedure, see Jason Brennan, “Beyond the Bottom Line: The Theoretical Goals of Moral Theorizing,” Oxford Journal of Legal Studies 28 (2008): 277-296.

*As an example of this: Stakeholder theory is lousy as a criterion of right action. It’s not the correct theory of corporate social responsibility. Or, at least, there aren’t good arguments for it. However, because people suffer from moral blind spots, stakeholder theory is a pretty good decision procedure: thinking like a stakeholder theorist tends to reduce people’s blind spots and makes it more likely they will avoid certain moral errors.

UPDATE: Since we’ve had a slew of new Randian readers recently, I’ll use Rand’s theory as an example here. Rand says people should be independent thinkers, not conformists who just follow others slavishly. Suppose, for the sake of argument, Rand’s moral theory is true. However, it turns out, empirically, that many of the top Randians *were* slavish, non-independent, irrational thinkers, who followed Rand in a cult-like fashion. This doesn’t mean that Rand’s moral theory is wrong. Rather, it just means that she and her inner circle of fawning disciples failed to live according to the precepts of her own moral theory. Her moral theory could be a valid criterion of right, but for Rand and her disciples, it was not a useful decision-procedure. A criterion of right is psychology-independent and universal, but a useful decision-procedure is psychology-dependent and individualized, as it depends upon each individual person’s particular psychological flaws.

UPDATE 2: Another way to see the difference. Suppose, for the sake of argument, the correct moral theory is M. But now suppose an evil demon says, “Aha! I’m going to cast a spell making it so that anyone who believes M will misapply M, thus causing all M-believers to act badly, in a way inconsistent with theory M.” In that world, M is still the correct moral theory–it still explains what makes actions right and wrong. However, you wouldn’t want to teach people M or ask them to use M on the ground. Instead, you’d want to use an independent decision-procedure, something that would tend to make them act in accordance with M, but without consciously thinking about or trying to apply M.

UPDATE 3: There’s a good quotation from Keynes I use in my PPE class about how we should preach free trade as an inflexible dogma to politicians. Keynes’s view was that there are some cases where free trade is not good, but we can’t trust politicians to distinguish the cases where free trade is bad from the cases where it’s good. Accordingly, since it’s rarely bad, and since politicians are dumb, economists should just tell politicians to do free trade no matter what, period. So, Keynes was in effect saying that “Free trade is always good” is strictly speaking false, according to the correct economic theory, but also saying that “Free trade is always good” is a good decision-procedure for politicians, given their lack of knowledge and their flawed psychology.

  • adrianratnapala

    So it seems like you are arguing that we have no valid objection to your moral theory except that it is not ethical to put your moral theory into practice.

    • Rachelle

      There are two ways to validly object to a moral theory:

      1. Show that it is internally contradictory (e.g. M1: One ought to x and not x).
      2. Show that one of its premises is false. Example: M2: Rightness consists in maximizing utility. Action x maximizes utility, therefore one ought to x.

      Now if you can show “rightness does not consist in maximizing utility,” then that is a valid objection to a theory.

      What Jason is saying (by the way, he’s on a roll recently! Love him!) is that, about M3 (“One ought not rape”), you can’t say:

      “Ya, but some people will still end up raping, so it’s just not practical, so it’s wrong.” No, it’s not. People’s inability to fulfill or carry out a principle does not affect whether the principle is true or false. To determine whether it is false, you must either find a contradiction (contradiction introduction) such that you can negate it (negation introduction, a.k.a. reductio ad absurdum), or you need to show that one of the premises is false (to show that the moral theory is unsound).

      • Rachelle

        Edits/corrections/clarifications: My formatting makes it a little confusing, and I left out a few words. In the last paragraph, by “the principle,” what I mean is M3. M3 is the principle (let’s just pretend it encompasses an entire moral theory) that “one ought not rape.”

        TL;DR: People’s inability to fulfill some moral principle is not a valid objection to that principle. I.e., it does not demonstrate that the moral principle is false. What DOES demonstrate that a moral principle is false is showing either (a) that it is internally contradictory, such that you may derive a contradiction (and thereby negate the theory); or (b) that the theory is unsound, by pointing out a false premise. E.g., “P1: Rightness consists in minimizing the number of cows on Earth.” P1 is false. Therefore, even if the theory passes the consistency test in (a), it is valid but still unsound.

        I suck at proofreading.

        • TracyW

          Hmm, I think the statement that “ought” implies “can” is a good moral principle, and thus people’s inability to fulfill some moral principle is a valid objection.

          To take another case, let’s say a quadriplegic sees a person drowning, but due to their paralysis can’t save that person themselves, and due to the particular circumstances can’t summon help in time. This is a valid objection to a moral principle that says “You should save anyone you see drowning.”

    • Jason Brennan

      Not quite. It’s ethical to put the theory into practice, but many people will act unethically because when they try to follow the theory, they will mess it up, and fail.

      Analogously: It’s true that in some cases you can make a pre-emptive strike. It’s easy to imagine cases where pre-emption is permissible. But it’s also true that politicians suck at applying the principles of just war. That doesn’t make the principles false. It just means that when politicians try to follow the principles, they tend to fail.

      • greg byshenk

        I suggest that at least part of the problem is that, while it is true that “It’s easy to imagine cases where pre-emption is permissible”, it is impossible (or very nearly so) to -discover- such cases. That is, given an imaginary hypothetical in which you simply assume the facts of the matter to be X, Y, and Z, and the consequences of acting (or not acting) to be A, B, and C, then of course one can rightly conclude that some action (such as a pre-emptive strike) is permissible — perhaps even a duty.

        Unfortunately, in the real world, where one makes decisions rather than theorizing about them, one never has such perfect knowledge. One never has all of the facts, nor can one be certain of the consequences of any given action.

        The same sort of problem infects your earlier examples of evil wizards, and the like. Yes, if one assumes perfect knowledge that person P will perform action Q with bad effects R, then perhaps one can justify lying to them. But absent such perfect knowledge, one cannot know that lying won’t cause them to perform action Q, or that Q might somehow result in (good) effect S, or that lying to P will cause agent T to perform action Q, etc. etc. etc.

        And all of this even before we get to the matter (hinted at already by someone else, I think), that voting is what we use precisely in those cases in which there is dispute about whether some given ends (and means) are good or bad.

  • Suppose I ask someone ‘What is the quickest way for me to get to Old Leake from here?’ He gives me directions for which winding country lanes to follow, when to turn off onto a different lane, what landmarks to watch out for, and so on. A third person overhears this and says: ‘That way involves zig-zagging around fields of crops. The quickest way is to fly in a straight line.’

    In one sense, what the third person says is true, namely, flying in a straight line to Old Leake is the quickest way of getting there from here IF you are able to fly. But I am not able to fly. So what the third person says is a false answer to my question, because I was asking what is the quickest way FOR ME to get to Old Leake. Directions are essentially practical: directions which it is not possible to implement are false.

    Moral theories are like directions. They say what people ought to do, and ought implies can. If it is impossible for people, as they actually are, to follow a moral theory, then that theory is false. Any ‘criterion of right’ which is not a practicable ‘method for making decisions’ is thus false.

    I haven’t read Brink’s paper. I just downloaded it, but I will not have time to read it for a while. So I am not responding to Brink’s arguments and it is possible that he has a reply to the argument I just stated.

    • Sean II

      If Brink’s paper actually had an answer to that, then it would be a very important paper indeed.

      This story has been going on for a long time. Ethics has long claimed some bizarre right to be indifferent to what is probable or possible in life.

      You know the old joke about the three professors stranded on the island with a closed can of soup? The physicist says “let’s smash it”. The chemist says “let’s heat it until the lid pops”. The economist says “first, assume a can-opener…”

      Always thought that was unfair, and that the stooge in the punch-line should have been an ethicist saying “the important thing is we should all agree on the desirability of food”.

      Economics has done incredible work in showing us the limits and capacities of human behavior. Meanwhile ethics has been one of history’s most prolific sources of broken dreams and murderous fantasies.

    • MARK_D_FRIEDMAN

      Hi Danny:
      I like your analogy, but would offer a friendly amendment. Moral theories might be expected to offer “directions” in a very wide variety of cases, both real and imagined. It might be that competing moral theories generally agree on what the best (useful) directions are in many or even most circumstances. However, one theory might fail to generate useful directions more frequently than another, and that theory, all other things being equal, is inferior to the competition. This is, I believe, a problem for utilitarianism, which in many cases is indeterminate.

      • I think we might need to distinguish two points. First, different moral theories often issue very different directions in a wide range of cases. Where two theories differ, the directions of one of the theories will be seen as ‘not useful’ (in fact, as immoral) by the other theory. Second, a moral theory may issue directions in some cases that cannot be implemented by moral agents, because the latter do not have the capacities needed to implement them. Any such moral theory, I am saying, is false, because ‘ought’ implies ‘can.’

        For example, in parts of northern Africa and various other regions of the world many people (male and female) hold a moral theory according to which it is right and good to cut away substantial parts of the genitals of prepubescent or pubescent girls. The directions of that moral theory will be seen as ‘not useful,’ or immoral, to say the least, by just about everyone who reads this blog. But that abhorrent moral theory does not fail the ‘ought’ implies ‘can’ test.

        When you say that utilitarianism is ‘indeterminate’ I guess that you mean that it fails the ‘ought’ implies ‘can’ test rather than that it gives abhorrent directions (which it also does).

        • MARK_D_FRIEDMAN

          Yes, utilitarians would say (for example), “raise the minimum wage if this would maximize utility overall.” But, for a variety of reasons, economists can’t tell us whether this step would or wouldn’t. It tells you to fly from A to B when there is no plane or airport available for doing so.

  • TracyW

    I don’t agree with your analogy. Physics is not like moral theories. We can independently assess the truthfulness of a physics equation. We can’t do so with a moral theory. Basically, what use is a criterion of right, if it isn’t a useful guide for making moral decisions?

    • Jason Brennan

      Physics and moral theories might be different in other ways, but that’s not relevant to the particular analogy I’m using.

      But a criterion of right = an account of what makes actions right or wrong. A method of making decisions = a useful heuristic that makes it likely that you, given your psychology, will make decisions that are correct according to the correct criterion of right.

      • TracyW

        With physics equations, we can independently test them. We can work out where the equations would predict a ball would land, given certain starting conditions, then set the ball off according to those starting conditions and see if the ball lands in the location predicted by those equations. This can be done even if no outfielder in the world could use those equations in real time to catch the ball.

        But how can we independently test a criterion of right, if it’s not also useful as a method of making decisions?

      • TracyW

        I don’t see how your Rand example supports your point either. In this case, there seems to have been nothing that rendered Rand’s followers incapable of following the moral theory of independent thinking, they just chose not to.
        Obviously, if we don’t have free will, then Rand’s followers were incapable of it, but if we don’t have free will then how can we have a moral theory? You could never tell if you believe a moral theory because it’s right or just because you’re obliged to believe it.

        On the evil demon case, if the evil demon means that the moral theory M can’t be followed, how could we know that M is correct? And of course, since we’re people, we couldn’t teach people to behave in accordance with M as whenever we tried to do this, we’d be misapplying M. If an evil demon made me create errors whenever I tried to solve the physics equations to locate the ball, then I can’t know those physics equations are right.

  • Sean II

    “The fact that most people would botch applying a theory does not show that the theory is wrong.”

    Sure it does, if your goal is to gain moral guidance in the only world which actually exists.

    For example, the only problem with the theory that prices can be controlled with price controls is people botch the application, and instead of producing the same amount of goods for less, they produce fewer goods.

    The only problem with Catholic sexual morality is people like to fuck a lot more than they like having children.

    The only problem with…well, you get the picture.

    This is academic professional deformation at its clearest. The ONLY time one would say “just because this moral theory telling people how to behave is totally irrelevant to actual human behavior doesn’t mean it’s wrong” is… if one’s purpose was to publish a theory, without having to live by it.

  • Lying to voters is prima facie a bad moral policy. Just because you can justify it under the assumption that voters as a group are akin to a traveling band of murderers doesn’t mean that you’ve made a case for the morality of lying.

    There is no need to defeat with logic what can be defeated with common sense. A moral policy of lying to stupid people for their own good is a cynical, defeatist, elitist moral sentiment that can realistically only ever result in a complete breakdown of faith in the system. Anarchists can suggest that this is a long-run good thing, but then why break the system with a series of noble lies when you can promote anarchy honestly, as many do?

    Maybe I just don’t get it.

  • ppnl

    The fact that the vast majority of people get the principle wrong does not mean the principle is wrong. It means that the principle is utterly useless and even dangerous.

    For example, would I torture to save a thousand lives? Maybe. But the circumstances under which torture is justified are so rare that the correct principle here is DO NOT TORTURE! Can there be exceptions to this? Maybe, but the moral hazard is so great, both in getting it wrong and in legitimizing torture generally, that the correct principle is still: do not torture. Even in a case where torture may be considered justified, it still may cause more harm than good, in that it legitimizes torture.

    I am an atheist and so don’t even believe in an absolute morality. But lying is so corrosive to process, party, and person that the operating principle here must be DO NOT LIE! Can there be exceptions? Maybe, but the moral hazard is so great that the operating principle must be: do not lie.

    Add to this the low barrier politicians have to lying anyway…

    If you ever ran for office I could not trust anything you said and so could not vote for you. See how that works? Lying is corrosive.

    • adrianratnapala

      Sorry for being off topic, but this sentence stuck out:

      I am an atheist and so don’t even believe in an absolute morality.

      This little gem packs in a HUGE concession to C.S. Lewis and his ilk: “Morality is real. Therefore God is the ruler of the universe.” Perhaps one can try to drive a wedge between “morality is absolute” and “morality is real,” but compared to what you conceded, that’s just pissing in the wind.

      • ppnl

        I… don’t follow.

    • Theresa Klein

      This.
      The risk caused by having a principle that says politicians can decide for themselves when it’s ok to lie may be greater than the risk of stupid voters voting for bad stuff.
      The torture analogy is particularly apt, too.

  • ThaomasH

    So the argument is that although telling the truth to voters (like honesty generally) is not always the best policy, one should advocate it anyway?

    • ppnl

      Nothing is always the best policy. There are no absolutes. But sometimes the exceptions are so tortuously contrived that they can generally be ignored.

      • ThaomasH

        I agree. And my conclusion for that is that although advising politicians to lie to voters might sometimes be correct, it’s better to recommend honesty.

  • Jon Herington

    Isn’t the more relevant split between ideal and non-ideal theory, not between moral theory and a decision-procedure?

    I took you to be making an intervention in a non-ideal political system (democracy) since, as I think you’ve said before, you’d prefer anarcho-capitalism to a democracy. If you are making a claim about non-ideal political theory, then it seems legitimate to object to your claim about what we ought to do in the unjust meanwhile by noting that it is, given the actual facts, infeasible (or, alternatively, that it is only feasible if we assume idealizations which you aren’t entitled to for free).

    • MARK_D_FRIEDMAN

      Good point!

  • Charles

    Hi Jason,
    I think there’s no such thing as non-ideal theory. It is pointless in deontic reasoning and already implied in consequentialist reasoning. If I ought to lie to voters no matter what, then “they’ll mess it up” isn’t an objection. If politicians ought to lie to voters on the ground that this would maximise overall utility, then it is a huge objection. While there is no such thing as non-ideal theory, there are non-ideal worlds. Those worlds raise questions for consequentialists.
    It seems to me that you are arguing about lies, voters and politicians in a consequentialist fashion. For instance, suppose the murderer is at my door. Case 1: If I don’t tell him the truth he’ll go away and the person I’m hiding is safe; if I tell him the truth, he’ll find and kill the potential victim. Case 2: If I don’t tell him that the potential victim is at my place, he will find another way to kill her; If I tell him the truth, somehow the potential victim will find a way to act in self-defense and kill the aggressor.
    In case 1, you would definitely lie; in case 2, I believe you wouldn’t, if you somehow managed to know the outcome in advance (am I right?). It seems to me that this is consequentialist reasoning. If I am right, then you would definitely not endorse a rule that would produce such terrible consequences (if it does; I am not sure it does, though).
    I must admit that my example isn’t perfect. It just shows that the rule is not “always lie to the murderer at your door,” but “always lie to the murderer at your door when you think this is going to save the victim.” The analogy maybe isn’t perfect because it does not really involve misapplication, but suppose we are in the following game: we have to decide whether to implement your rule or not. If we implement it, politicians can either abuse their right (or duty) to lie or not. If they don’t abuse it, the payoff is 5; if they abuse it, the payoff is 1; if we don’t implement the rule, the payoff is 3. Suppose that politicians can’t follow the rule without abusing it: in this case it remains true that under full compliance we should endorse your rule, but since we can predict that this is never the case, we should not implement the rule. In conclusion, in an ideal world, your rule would maximise utility and is, thus, just. In a non-ideal world, your rule would not maximise utility and is, thus, wrong (in a consequentialist fashion).
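Charles’s payoff game can be checked with a quick expected-value calculation. A minimal sketch (the compliance probability p is an illustrative parameter I am adding, not something from the comment):

```python
# Charles's payoffs: 5 if the rule is implemented and not abused,
# 1 if implemented and abused, 3 if the rule is never implemented.
PAYOFF_COMPLY = 5
PAYOFF_ABUSE = 1
PAYOFF_NO_RULE = 3

def expected_payoff(p):
    """Expected payoff of implementing the rule, where p is the
    probability that politicians follow it without abusing it."""
    return PAYOFF_COMPLY * p + PAYOFF_ABUSE * (1 - p)

def should_implement(p):
    """Implement the rule only if its expected payoff beats the baseline."""
    return expected_payoff(p) > PAYOFF_NO_RULE

# 5p + (1 - p) > 3 simplifies to p > 0.5: the rule is worth adopting
# only if politicians are more likely than not to comply.
```

On Charles’s own assumption that politicians can’t follow the rule without abusing it (p = 0), the expected payoff is 1 against the baseline of 3, which matches his conclusion that the rule should not be implemented.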

  • rocinante

    “As David Brink notes, utilitarian moral theory means to provide a criterion of right, not a method for making decisions.”

    No wonder being a philosopher is one of the hardest jobs in the world to get.

  • I wish there would be an RSS feed for Brennan’s posts.

    I did find a link on the BHL site, but it doesn’t actually work. It would be great if the site admins could fix it: http://bleedingheartlibertarians.com/author/jason-brennan/feed/