
The One-Boxer Argument for Voting

Jason Brennan’s excellent The Ethics of Voting dispatches a number of familiar arguments for a duty to vote and provides grounds for a duty to vote well or not vote at all. I’ve been mulling over an argument for voting that J doesn’t address (probably because it is crazy). But let me try to work it out and see what you think. It’s complicated, as the argument is based on Newcomb’s Paradox and resolving the paradox in favor of the “one-boxer” position. As such, I’ll call this The One-Boxer Argument for Voting. If you get to the end, I think you’ll find the conclusion interesting.

I. Newcomb’s Paradox

Robert Nozick made Newcomb’s Paradox famous, so let’s begin with his description of it [skip to II if you know the paradox]:

Suppose a being in whose power to predict your choices you have enormous confidence. (One might tell a science-fiction story about a being from another planet, with an advanced technology and science, who you know to be friendly, etc.) You know that this being has often correctly predicted your choices in the past (and has never, so far as you know, made an incorrect prediction about your choices), and furthermore you know that this being has often correctly predicted the choices of other people, many of whom are similar to you, in the particular situation to be described below. One might tell a longer story, but all this leads you to believe that almost certainly this being’s prediction about your choice in the situation to be discussed will be correct.

There are two boxes, (B1) and (B2). (B1) contains $1000. (B2) contains either $1,000,000 ($M), or nothing. What the content of (B2) depends upon will be described in a moment.

(B1) {$1000}                                               (B2) {$M or $0}

You have a choice between two actions:

(1) taking what is in both boxes

(2) taking only what is in the second box.

Furthermore, and you know this, the being knows that you know this, and so on:

(I) If the being predicts you will take what is in both boxes, he does not put the $M in the second box.

(II) If the being predicts you will take only what is in the second box, he does put the $M in the second box.

The situation is as follows. First, the being makes its prediction. Then it puts the $M in the second box, or does not, depending upon what it has predicted. Then you make your choice. What do you do?

There are two plausible looking and highly intuitive arguments which require different decisions. The problem is to explain why one of them is not legitimately applied to this choice situation.

Now, before we look at the arguments for (1) or (2), figure out where your intuitions lie. I go with (2). I only take one box, and I can’t shake the intuition. I have tried to shed my one-boxerism but it is no good. As Nozick says, “To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.”

Nozick continues:

First argument. If I take what is in both boxes, the being, almost certainly, will have predicted this and will not have put the $M in the second box, and so I will, almost certainly, get only $1000. If I take only what is in the second box, the being, almost certainly, will have predicted this and will have put the $M in the second box, and so I will, almost certainly, get $M. Thus, if I take what is in both boxes, I, almost certainly, will get $1000. If I take only what is in the second box, I, almost certainly, will get $M. Therefore I should take only what is in the second box.

The foregoing is the argument for being what is now called a “One-Boxer.”

Second argument. The being has already made his prediction, and has already either put the $M in the second box, or has not. The $M is either already sitting in the second box, or it is not, and which situation obtains is already fixed and determined. If the being has already put the $M in the second box, and I take what is in both boxes I get $M + $1000, whereas if I take only what is in the second box, I get only $M. If the being has not put the $M in the second box, and I take what is in both boxes I get $1000, whereas if I take only what is in the second box, I get no money. Therefore, whether the money is there or not, and which it is is already fixed and determined, I get $1000 more by taking what is in both boxes rather than taking only what is in the second box. So I should take what is in both boxes.

And this is the argument for being what is now called a “Two-Boxer.”
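To see how sharply the two lines of reasoning diverge, it helps to plug in some illustrative numbers (the 99% figure below is my own assumption; Nozick says only that the prediction is “almost certainly” correct). If the predictor is right 99% of the time, the first argument’s evidential calculation gives one-boxing an expected payoff of 0.99 × $1,000,000 = $990,000, and two-boxing an expected payoff of 0.99 × $1,000 + 0.01 × $1,001,000 = $11,000. The second argument’s dominance reasoning ignores those conditional probabilities entirely: whatever is already sitting in B2, taking both boxes nets you exactly $1,000 more than taking only B2.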

I won’t address this point here, but there are deep similarities between Newcomb’s Paradox and the Prisoners’ Dilemma. For a time many thought, following David Lewis, that the Prisoners’ Dilemma just was a Newcomb problem, but now most think (though I could be wrong) that neither is a subset of the other. At the very least, Newcomb choices aren’t simultaneous, so it’s not a normal-form PD game. So let’s set the PD aside for the moment.

The crazy thing about being a One-Boxer is that you are acting merely on an expected utility calculation while knowing full well that your particular choice can no longer change the outcome. In other words, you act on an expected utility calculation in full knowledge of the fact that there is no causal connection between your present action and how much money is in B2. And yet, by taking only the one box, you’ll get more money.

To be a One-Boxer is to affirm the priority of evidential expected utility calculations (which vindicate One-Boxing) over causal expected utility calculations (which vindicate Two-Boxing) when they conflict. They almost never do, but they can conflict in principle. I won’t draw out the idea further here except to say that you can characterize the conditional probabilities in accord with how you interpret the dependence relations between your choices and the box contents, which as we’ve seen can easily go either way.
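Here is a minimal sketch of the two calculations, under my own illustrative assumptions (a single predictor-accuracy parameter p; the function names are mine, not standard decision-theory vocabulary). The evidential calculation treats my act as news about what is in B2; the causal calculation holds the already-settled probability that the $M is there fixed across acts.

# Toy illustration only; the predictor accuracy p = 0.99 is an assumption, not Nozick's.

def evidential_eu(act, p=0.99):
    """Condition the contents of B2 on my act, treating the act as evidence."""
    if act == "one-box":
        # If I one-box, the predictor almost certainly foresaw it and put in the $M.
        return p * 1_000_000 + (1 - p) * 0
    else:  # "two-box"
        # If I two-box, the predictor almost certainly foresaw that and left B2 empty.
        return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

def causal_eu(act, q):
    """Hold fixed q, the already-settled probability that the $M is in B2."""
    if act == "one-box":
        return q * 1_000_000
    else:  # "two-box"
        return q * (1_000_000 + 1_000) + (1 - q) * 1_000

print(evidential_eu("one-box"), evidential_eu("two-box"))      # roughly 990000 vs 11000
for q in (0.0, 0.5, 1.0):
    # Whatever q is, two-boxing comes out exactly $1000 ahead on the causal calculation.
    print(causal_eu("two-box", q) - causal_eu("one-box", q))   # always 1000.0

On these toy numbers the evidential calculation favors one-boxing by a wide margin, while the causal calculation recommends two-boxing no matter what probability you assign to the $M already being there.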

II. Newcomb’s Voting Paradox

Now, let’s assume that the Predictor won’t put any money in B2. Instead, he’ll place either President Obama or President Romney in the box (but not both, not neither, and not anyone else, and they’ll be fully alive and rationally competent). Suppose you have a clear preference for one or the other. If the Predictor predicts you will take only B2, then he will put your preferred candidate in the box. But if the Predictor predicts you will take both boxes, he will put the other candidate in the box. Taking only B2 corresponds to voting, and taking both boxes corresponds to staying home: B1 contains the net utility or disutility at stake in the act of voting itself, not counting the utility you get from having either Romney or Obama win. Many people, I think, get net utility from the act of voting, but assume you don’t, so that B1, the payoff of staying home, contains something worth taking.

Now, what should you do? On One-Boxerism, you should take B2. After all, you’ll get your preferred candidate in office, even though there’s already a fact of the matter about whether it is Obama or Romney in B2. If you engage in Dominance reasoning, you’ll obviously take both boxes, but that will leave you with the other guy. Yuck.

OK, so here’s my conjecture:

(1) If you are a voter whose inclination (i) to vote and (ii) to vote for either Obama or Romney is a reliable indicator of the outcome AND,

(2) If you have justified beliefs about (1) AND,

(3) If, on your view, the outcome matters enough to outweigh whatever net disutility you get from voting AND,

(4) If you have no countervailing moral reasons (not counted in your disutility) to vote, THEN:

(C) Despite the fact that you will have no causal impact on the outcome, and that you get mild disutility from voting, it is rational for you to vote.

To put it simply, if you have reason to think that your inclination to vote for your candidate is a bellwether for whether he will be elected, and you care a lot about the outcome, then it is rational for you to vote for your candidate, in the absence of countervailing moral reasons and despite the disutility you get from voting.
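For concreteness, here is the same comparison transposed to the voting case. Everything in it is my own illustrative stipulation rather than part of the argument above: a benefit from your preferred candidate winning, a net cost from the act of voting (per condition (3)), and a bellwether correlation r = P(your candidate wins | you vote), with P(your candidate wins | you stay home) = 1 − r, standing in for condition (1). The point is only that the evidential calculation can favor voting even though the causal calculation never does.

# Toy sketch, not a model of any real election; all parameter values are assumptions.

def evidential_eu_vote(benefit, cost, r):
    # Conditioning the outcome on my act: if I vote, my candidate wins with probability r.
    return r * benefit - cost

def evidential_eu_stay_home(benefit, r):
    # If I stay home, my candidate wins only with probability 1 - r.
    return (1 - r) * benefit

def causal_eu_vote(benefit, cost, q):
    # Causally, my single vote doesn't move the fixed win probability q, so voting just costs me.
    return q * benefit - cost

def causal_eu_stay_home(benefit, q):
    return q * benefit

benefit, cost, r = 1_000.0, 5.0, 0.95
print(evidential_eu_vote(benefit, cost, r) > evidential_eu_stay_home(benefit, r))   # True: vote
for q in (0.2, 0.5, 0.8):
    print(causal_eu_vote(benefit, cost, q) > causal_eu_stay_home(benefit, q))       # False: never vote

Conditions (1)–(4) are just what it takes for the first comparison to be legitimate: the correlation has to be genuinely reliable, you have to be justified in believing it, and the benefit of the outcome has to outweigh the cost of voting.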

III. Voter Rationality May Not Depend on Making a Causal Contribution to Outcomes

Now, if we’re Two-Boxers, the argument flatly fails. In fact, Two-Boxing will show that in this case it is positively irrational to vote, since you could almost certainly do something that generates more utility than voting, even if you will feel mildly pained at having not voted.

However, if you’re a One-Boxer, then I have described a situation in which it would be rational for a person to vote.

And here’s why the result matters: when a libertarian/cynic says that your vote doesn’t count because it can’t affect the outcome, you can respond: that doesn’t matter; what matters is whether my voting is correlated with the outcome. All that matters is that the following holds: IF I vote, THEN the election goes my way, EVEN IF there is no causal connection between the two. So technically speaking, if you’re a One-Boxer, it is wrong to tell people not to vote because their vote has no causal power. You have to show instead that their vote does not correlate with the outcome.

Now of course, that’s hilariously easy to do, depending on how broadly you understand the idea of being a “reliable indicator” in (1). To figure out whether you’re a reliable indicator you have to, at the very least, have voted a large number of times in elections that are roughly similar. And, after all, half the country votes for the loser, so they’re not going to count as reliable.

That said, we might interpret reliability in terms of the correlation between a voter’s dispositions and the outcome rather than her actions. In that case, if the voter’s dispositions were a reliable indicator of the outcome, then she might be justified in acting on them (though of course we’d have to consider the probability that dispositions are converted into choices).

Further, who would these voters be? A swing voter is the most obvious candidate, but swing voters are, by and large, dumb at politics, so they won’t satisfy condition (2) for that reason alone. However, suppose that you’re partisan, well-informed, but lazy. In that case, you might form a justified belief that if you’re motivated enough to get off your butt to vote, then your candidate will probably beat the other guy. If so, it is rational for you to vote.

And yet even in this case, condition (2) is pretty hard to satisfy. How could one justifiably believe that she is a bellwether? I think there may be cases where you can have such a justified belief, but they will be relatively rare.

IV. The Wacky Conclusion – One-Boxer Voters Might Be Rational

So, in conclusion, if you resolve Newcomb’s Paradox in favor of One-Boxing, then it is not the case that X should not vote because her vote makes no causal contribution to the outcome. Instead, X should not vote if her vote does not correlate with the outcome she desires.

I think this is a consequential result. Even though the argument justifies almost no one in voting, it shows that a common claim, namely that your vote makes no causal difference, only counts against voting directly if you’re a Two-Boxer. If you’re a One-Boxer, a different claim is required, namely that your vote does not correlate with the outcome.

————-

*I thank super-smart fellow philosopher and decision theorist Ryan Muldoon for helping me think through some of the issues here, though all errors in the post are my responsibility alone. I also thank a dear friend for telling me that I would have to be insane to post this argument.
