
Is Belief in the Moral Parity Thesis Dangerous? (Probably Not. But Even If So, So What?)

Suppose that I come to believe, stupidly, that taking caffeine is dangerous. I announce that, henceforth, anyone I catch drinking coffee will be locked in my basement for 30 days as punishment. I see you walking out of Starbucks and try to grab you. You fight back, and, in the struggle, injure or kill me. What you did was permissible self-defense.

The “Moral Parity Thesis” holds that nothing magic happens if the would-be kidnapper is a cop rather than a private civilian. *If* (and this is a small if) drug prohibition laws are unjust and illegitimate (as they obviously are), then from a moral point of view, you may defend yourself from a cop trying to arrest you in the same way you could defend yourself from me. Given that cops are armed and dangerous, it may not be strategic to do so, but morally, it’s permissible.

One putative objection to the Moral Parity Thesis is that it is dangerous, because people will misapply it. The objection goes:

We are poor judges of consequences. We are prone to vengeance and anger.  If Jason Brennan’s moral parity thesis were widely believed, people would probably misapply the principles in dangerous ways. In any real-life scenario, if a person believes himself permitted to kill a cop, congressperson, or president, he should recognize he is prone to error, and should be extremely skeptical of his conclusion that violent resistance is permitted in this instance.

In effect, this objection says that my argument is self-effacing. If people believed it, they would misapply it. While trying to conform to my theory of defensive action, they would act in ways not actually authorized by this position.

This objection is closely related to a mistaken objection people sometimes raise against the view that people may break unjust laws. I was discussing this point with a law professor a few years ago, when the professor said, “So you think people may break unjust laws?”

“Sure,” I responded, “And indeed I hope they do, if they can get away with it.”

“But surely you can’t mean that a person can break any law just because he thinks it’s unjust. That’s a license for anarchy!”

“Right,” I responded, “But notice the difference between what you said and what I said. I’m saying that some laws are in fact unjust—that there’s an independent moral truth about whether laws are just or not. When the law is in fact unjust, then there is no duty to obey it. That’s not the same thing as saying that you can break any law because you believe it’s unjust.”

“But might someone be mistaken? Don’t they have to judge for themselves?” he asked.

“Of course,” I said, “But that’s a problem for every theory. Every moral theory says something like, ‘Under conditions A you must do X; under conditions B you must not do Y; etc.’ The theories don’t say ‘Do X when you judge you’re in A’—after all, you might be mistaken or negligent or reckless in making that judgment. Instead, they say, ‘Do X when you are in fact in A.’ Notice the difference.”

The general theory of defensive action says that we can use defensive action under certain conditions. Call those conditions C. The Moral Parity Thesis says C are also sufficient conditions for using defensive action against government agents acting ex officio. The Special Immunity Thesis denies that, and says defensive action is permissible against government agents only under a much more tightly constrained set of conditions, or not at all.
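
To make the contrast explicit, here is one way to put the two theses in schematic form. The notation is my own gloss, not anything the argument depends on: read D(g) as “defensive action against agent g is permissible,” C(g) as “the general conditions for defensive action hold with respect to g,” and G(g) as “g is a government agent acting ex officio.”

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A schematic gloss of the two theses; the predicate names are mine.
% D(g): defensive action against agent g is permissible.
% C(g): the general conditions for defensive action hold for g.
% G(g): g is a government agent acting ex officio.
\begin{align*}
\textbf{Moral Parity:}\quad
  & \forall g\,\bigl[\,C(g) \rightarrow D(g)\,\bigr]
  && \text{whether or not } G(g) \text{ holds}\\
\textbf{Special Immunity:}\quad
  & \forall g\,\bigl[\,\lnot G(g) \rightarrow \bigl(C(g) \rightarrow D(g)\bigr)\bigr]\\
  & \forall g\,\bigl[\,G(g) \rightarrow \bigl(D(g) \rightarrow C^{*}(g)\bigr)\bigr]
  && \text{where } C^{*}\text{ strictly entails } C\text{, or no condition suffices}
\end{align*}
\end{document}
```

On this rendering, the Dangerous Misapplication Objection attacks neither formula: it concerns how reliably people can tell when C obtains, not whether C suffices.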

The Dangerous Misapplication Objection fails for the same reason self-effacingness objections generally fail. The fact that most people would botch applying a theory does not show that the theory is wrong.

So, for instance, suppose—as is often argued—that most people would misapply utilitarian moral standards. Perhaps applying utilitarianism is too hard for the common person. Even if so, this does not invalidate utilitarianism. As David Brink notes, utilitarian moral theory means to provide a criterion of right, not a method for making decisions.[i]  Utilitarianism is supposed to explain what makes actions right and wrong. Whether it is useful—given flawed human psychology—as an algorithm or tool for people on the ground to make decisions is a different matter. Even if everyone consistently misapplied utilitarianism, this would not show the theory is false.[ii]

If that seems weird, consider, as an analogy, that certain physics equations explain why the baseball lands where it lands. These physics equations capture the truth of the matter. However, most expert outfielders would never catch a ball if they tried to do so by “applying” the equations on the ground. Unless they are math wizards, doing so is too hard and too slow. So, the “decision procedure” they should use on the ground for catching the ball is whatever psychological mechanism is most likely to get them to the ball. The equations explain the ball’s path, but do not provide a “decision procedure” for catching balls.
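
For concreteness, the sort of equation at issue is the elementary projectile formula; this particular one (which ignores air resistance, unlike real outfielders) is a standard textbook result, offered only as an illustration:

```latex
\documentclass{article}
\begin{document}
% Idealized range R of a projectile launched at speed v and angle theta,
% neglecting air resistance; g is the gravitational acceleration.
\[
  R \;=\; \frac{v^{2}\,\sin(2\theta)}{g}
\]
\end{document}
```

Knowing this formula is neither necessary nor sufficient for catching fly balls; outfielders are often described as relying on perceptual heuristics instead, such as moving so that the ball stays at a roughly constant visual angle.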

Lying, deception, sabotage, destruction, and violence are dangerous. We should be self-aware and recognize that we are prone to error. We should be aware that defensive actions are morally risky. We should also be aware of our own epistemic uncertainty. Consider again this example from chapter 1. Suppose Ann comes across what appears to be a police officer about to execute someone. Should she shoot him to stop him?

It’s rare, even in the US, for police officers simply to try to murder innocent people. Though it seems like that’s what this officer is doing, Ann should give the officer the benefit of the doubt, and presume that there must be some good reason for what he’s doing. She should thus not kill him, at least not until she’s more certain or has more information.

This new objection—let’s call it the Epistemic Uncertainty Objection—gets something right. However, upon reflection, it doesn’t do the work defenders of the Special Immunity Thesis need it to do.

Recall that one of the conditions for defensive violence was that the defender had to have a reasonable belief that the defensive action was necessary to prevent the purported aggressor from committing a severe injustice or harm. We can reasonably debate just how much epistemic justification is required for a belief to be reasonable, or which beliefs are reasonable in different situations. That’s not to say the question is hopelessly controversial: some beliefs are obviously reasonable, some are obviously not, and some fall in the middle area of, well, reasonable debate about what’s reasonable. All that aside, the point remains that so long as the defender’s belief is reasonable, and the other conditions for defensive action are met, defensive action is permissible. The defender does not need to be certain. She can have reasonable doubts about whether defensive action is necessary. But if she reasonably believes defensive action is necessary (and the other conditions are met), she may use defensive action.

Now, there’s an interesting question here about what we should infer when we see government agents doing something that appears to be unjust. While there is rampant police abuse in the United States, it would be absurd for me to shoot a cop as soon as he pulls over a drunk driver. Most likely, the officer will not use excessive force or violence against the drunk driver, but will act in a professional and diligent manner. On the other hand, if I see him drag the driver out, knock him down, and then start pummeling the driver with the barrel of his gun, then while it’s possible the officer had to do that to protect himself, most likely he’s engaging in excessive violence and is a rightful target of defensive violence himself. In some cases, we have reasons to presume that what the government is doing is unjust even if we lack other details. For instance, since our best evidence indicates that about 90% of drone strikes kill innocent people, a person might feel free to shoot down any drone she sees. Sure, some drone strikes might be just, but statistically those are rare.

In the end, these points illustrate no principled difference between government and non-governmental agents. At most, they show that when we form beliefs about what others are doing, we have to rely on statistical trends and background information. We might encounter situations in which two people seem to be doing the same thing, something that looks potentially unjust, but, based on our background knowledge about those people or people like them, we might infer that it’s more likely one of them has a justification for what she’s doing and the other doesn’t. Suppose A) I turn the corner and see a police officer beating someone with a baton. Suppose in another scenario, B) I turn the corner and see an ordinary man beating another man with a bat. It’s statistically more likely that cases like B are instances of injustice than cases like A: it’s more likely that a police officer beating a person is justified in doing so than that a random person is. A person considering defensive action has to take these sorts of things into account when forming beliefs about whether defensive action is necessary. But all this shows, at most, is that in some cases government agents who seem like they might be doing something unjust are less likely actually to be doing something unjust than civilians doing the same thing. That’s not a principled difference, and it’s compatible with the Moral Parity Thesis.
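
To see how such base rates feed into the reasonable-belief condition, here is a toy Bayesian calculation. Every number in it is invented purely for illustration; nothing in the argument turns on these particular values:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Toy Bayesian update; all numbers are invented for illustration.
% U = "the agent is acting unjustly"; O = "the beating looks excessive".
% Assumed base rates: P(U | officer) = 0.3, P(U | civilian) = 0.9.
% Assumed likelihoods: P(O | U) = 0.9, P(O | not-U) = 0.3.
\begin{align*}
P(U \mid O, \text{officer})
  &= \frac{0.9 \times 0.3}{0.9 \times 0.3 + 0.3 \times 0.7}
   = \frac{0.27}{0.48} \approx 0.56\\[4pt]
P(U \mid O, \text{civilian})
  &= \frac{0.9 \times 0.9}{0.9 \times 0.9 + 0.3 \times 0.1}
   = \frac{0.81}{0.84} \approx 0.96
\end{align*}
\end{document}
```

The same observation, filtered through different base rates, supports very different degrees of belief; whether a posterior of 0.56 crosses the threshold for a “reasonable belief” is exactly the sort of question left, above, to reasonable debate.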

In summary, in the real world, when we think defensive violence or other defensive actions are justified (according to the argument presented here), we should be extra-cautious and self-skeptical. However, none of this shows that defensive actions are always forbidden, or that government agents enjoy special immunity against defensive action.

All that said, I wonder if this objection mostly has the problem backwards. The worry here is supposed to be an epistemic problem: that people will misapply the theory, mistakenly resisting government agents even when they should not. On the contrary, it seems more plausible that citizens are more likely to engage in wrongful obedience than they are to engage in wrongful resistance. The typical person is a conformist, deferential coward.

Consider: Many experiments show that we are biased to conform our opinion to that of the majority (or that of whatever group we want to be part of), even when it is irrational to do so. Perhaps the most famous example of this is the Asch experiment. In Asch’s experiment, eight to ten students were shown sets of lines in which two lines were obviously the same length, and the others were obviously of different length. They were then asked to identify which lines matched.

In the experiment, only one member of the group was an actual subject; the rest were confederates. As the experiment proceeded, the confederates began unanimously selecting the wrong line.

Asch wanted to know how the experimental subjects would react. If nine other students all say that lines A and B, which are obviously different, are the same length, would subjects stick to their guns or instead agree with the group? Asch found that about 25% of subjects stuck to their own judgment and never conformed; about 37% caved in, coming to agree completely with the group; and the rest would sometimes conform and sometimes not.[iii]

Or consider the Milgram experiment.[iv] This experiment seemingly shows that we will obey orders, even when we believe what we are being ordered to do (deliver seemingly life-threatening shocks to a fellow experimental subject) is immoral, and even when we want to disobey. We respond to social pressure by caving in and becoming cowards.

During the experiment, Milgram brought in two “subjects,” one of whom was secretly an actor. He assigned the real subject to the role of teacher and the actor to the role of learner, and then told them they were taking part in an experiment on memory. The teacher was to ask the learner questions; if the learner made a mistake, the teacher was to punish him by delivering an electric shock. (The apparent shocks went up in 15-volt increments, and carried labels such as “Danger: Severe Shock” and “XXX” at the extreme end.)

The teacher, after observing the learner being strapped to a chair and hooked up to electrodes, was taken to another room and told to begin the test. The learner/actor began giving incorrect answers according to a script. In some versions of the experiment, the learner would scream, or complain about his heart condition. In all versions, the learner would eventually stop answering the questions altogether. For all the teacher knew, the learner had passed out, or died. If at any point the teacher expressed concern or said he wanted to stop, the lab director, following a script, would tell the teacher, “The experiment requires you to continue,” or “Please continue.” The lab director also ordered the teacher to treat non-responses as incorrect answers and to deliver a higher shock.

In most versions of the experiment, almost all teachers would administer high-level shocks, despite showing clear and obvious discomfort over the fact that they were torturing another human being. Once the learner stopped responding, 65% of subjects/teachers kept going, sending what were, for all they knew, ever more lethal shocks into a possibly unconscious fellow subject.[v]

Most subjects showed obvious discomfort with what they were doing. Some laughed or cried; some became hysterical. Many asked the lab director who was responsible; the lab director would quietly assure them that he was. Only a minority quit and refused to deliver the highest voltage shock. During the debriefing afterwards, Milgram or his director asked subjects why they didn’t stop. Many subjects showed surprise, as if it hadn’t occurred to them that they could just stop.

These are just two major experiments, of course. But in general, psychology seems to show that citizens tend to err on the side of wrongful obedience rather than the side of wrongful resistance.[vi] From the Asch and Milgram experiments to contemporary work on intergroup bias in political psychology,[vii] we see that citizens are generally conformists who do what they are told and try to avoid conflict. Thus, to whatever extent these epistemic concerns push against my view, they push even harder against the other side. If anything, proponents of the Special Immunity Thesis should be cautious in expounding their views. People are far more likely to support and obey a Hitler or a Stalin than they are to stand up to a cop when they should back down.

[i] David Brink, “Utilitarian Morality and the Personal Point of View,” Journal of Philosophy 83 (1986): 417-38.

[ii] For an extended argument that moral theory aims to explain rather than to provide a decision-procedure, see Jason Brennan, “Beyond the Bottom Line: The Theoretical Goals of Moral Theorizing,” Oxford Journal of Legal Studies 28 (2008): 277-296.

[iii] Asch (1955), 37; Asch (1952), 457-58.

[iv] Stanley Milgram, “Behavioral Study of Obedience,” Journal of Abnormal and Social Psychology 67 (1963): 371-78. The next few paragraphs are an edited version of my summary (with David Schmidtz) from Schmidtz and Brennan 2010, 213-14.

[v] When Milgram asked in 1963 for predictions, Yale undergraduates predicted an obedience rate of 1.2%. Forty Yale faculty psychiatrists predicted a rate of 0.125%.  See Blass (1999), 963.

[vi] More citations

[vii] citations
