Many Facebook friends have linked to this post by Wendy McElroy.

She begins,

In entry-level philosophy class, a professor will often present a scenario that seems to challenge the students’ perspective on morality.

The argument runs something as follows: “The entire nation of France will drop dead tomorrow unless you kill your neighbor who has only one day to live. What do you do?”

Or “You could eliminate cancer by pressing a button that also kills one healthy person. Do you do so?”

Yep, so far, so good. It’s an interesting question about whether an innocent person’s right to life could ever be outweighed by other considerations. Would it be okay to kill an innocent person to avoid a disaster? That seems like something someone should ask. (Perhaps philosophers.)

McElroy disagrees. She says,

In reality, the questions are a sham that cannot be honestly answered. They postulate a parallel world in which the rules of reality, like cause and effect, have been dramatically changed so that pushing a button cures cancer. The postulated world seems to operate more on magic than reality.

Because my moral code is based on the reality of the existing world, I don’t know what I would do if those rules no longer operated. I presume my morality would be different, so my actions would be as well.

McElroy says this, but I doubt she really believes it. Consider:

  • Star Wars posits a parallel world in which the rules of reality, like cause and effect, have been dramatically changed. The postulated world operates more on magic than reality.
  • Same with Lord of the Rings.
  • To some degree, same with Atlas Shrugged, which involves significant science fiction.

When I watch Star Wars, I have no trouble evaluating whether the light side of the Force is the good side or the bad side, or whether the dark side is good or evil. All this, even though I developed my moral code in the real world. Same with McElroy. She knows that the dark side of the Force is the bad side, even though the Force doesn’t exist.

Moral rules apply to all sorts of unrealistic situations. And we have no special difficulty applying them in unrealistic cases. E.g.,
  • Was it okay for the IF to trick Ender into exterminating an alien race, which the IF justifiedly but incorrectly perceived to be an existential threat?
  • Would it be okay to feed your baby to Godzilla for fun?
  • Should Boromir have tried to take the ring from Frodo?
  • Was it okay for Raistlin Majere to strive to become a god and to overthrow Takhisis?
  • Should Magneto try to organize all the mutants against humanity?

All of these issues involve magic and bending the rules of reality, yet insofar as these questions are difficult to answer (and at least two of them are easy), the issues of magic/non-magic and realism/non-realism aren’t what make them difficult.

I presume McElroy, if she watches/reads sci-fi or fantasy, doesn’t do so as a moral agnostic. Rather, she judges away, easily applying her moral principles to new, unusual cases involving sci-fi and magic.

N.B. The remainder of her essay may be right or wrong. I’m just criticizing the beginning of it here.
 
  • Hume22

    I wonder what type of insight our intuitions are capable of providing in these situations, if we are to assume that our intuitions are determined by the forces of evolution. Do these hypotheticals, designed to operate on our intuitions, thereby help us understand moral reality? If intuitions are determined by evolutionary forces, why should we think that this is so? Why should we think that extravagant hypotheticals, the circumstances of which never arise in the process of intuition formation, nevertheless provide us with moral insight and understanding *precisely because of how they bring forth certain intuition responses*?
    I have not thought enough about these issues, so I am just spitballing here. I have some sympathy when Scanlon says that we need to distinguish between using the method wisely and not using it at all (see the interview with Scanlon at http://www.the-utopian.org/T.M.-Scanlon-Interview-1). But at the same time, I do wonder what basis we have for relying on intuitions as appropriate guides to situations as conjured up by the imagination of some philosophers. Perhaps I am still skeptical of reflective equilibrium.

    • http://www.sandiego.edu/~mzwolinski Matt Zwolinski

      I had the same thought. I don’t know if this is McElroy’s concern or not (I doubt it), but if one takes a naturalistic (Humean?) approach to moral intuitions, then there seems to be good reason for doubting their epistemic value in bizarre situations.
      In other words, if one views intuitions not as innate guides to objective moral reality, but as responses that we have evolved to have in response to certain kinds of important situations in order to promote survival, group cohesion, etc., then there seems to be good reason to doubt that our moral intuitions about cases utterly unlike those we have evolved to cope with have very much value at all as far as figuring out what the right thing to do is.
      This is out of my area of expertise. But I take it that this is the sort of concern that Josh Greene is trying to evoke in his paper on “The Secret Joke of Kant’s Soul”? http://www.wjh.harvard.edu/~jgreene/GreeneWJH/Greene-KantSoul.pdf

  • http://independent.academia.edu/DannyFrederick Danny Frederick

    I have not followed the link, so I have only seen what you quote from McElroy. My problem with it is that her conception of reality is parochial. First, there is technology. Only a few decades ago, people would have regarded as science fiction or magic what today we can actually do at the push of a button (or the click of a mouse). Second, there is science. One hundred and fifty years ago, scientists would have dismissed as impossible what relativity theory today tells us is necessary. Her objections SEEM to be the product of a closed mind.

  • http://www.facebook.com/profile.php?id=1287114893 Rick Schaut

    Jason,

    “I presume McElroy, if she watches/reads sci fi or fantasy, doesn’t do so as a moral agnostic. Rather, she judges away, easily applying her moral principles to new, unusual cases involving sci-fi and magic.”

    Does it not occur to you that one can easily apply moral principles to a work of fiction precisely because it is a work of fiction and that there are no real consequences as a result?

    This is such a facile argument that I’m surprised you’ve even chosen to voice it.

    • Jason Brennan

      Sorry, I don’t understand the force of your objection. Any interpretation I put on it makes it seem obviously wrong, so I must not get what you’re going after.

  • TWAndrews

    The difference between the works you’ve cited and the thought experiments is that LOtR, Ender’s Game, the Dragonlance books, etc. all take place in more-or-less fully developed and internally consistent worlds, whereas the thought experiments provide no additional context to differentiate them from our world.

    In the first example, why is everyone in France going to die? Who has the power to make it so, and why will killing my neighbor prevent them from doing so? Is it the person providing me this information, or someone else? Why are they credible? Could I choose someone else (myself for instance) to prevent the mass death?

    In the fictional works you mention, we’d have pretty good answers to questions of that nature, and more to the point, we believe that the characters in those worlds would understand the answers to those questions.

    McElroy is absolutely correct that hypothetical questions without any additional context don’t provide any real insight about moral reasoning, as they’re simply not well enough specified for us to understand what choice is actually being offered.

    • Sean

      I’m confident that most of those who hear this sort of scenario are charitable enough to understand that it is conceived so that no, you cannot choose someone else (it explicitly says ‘your neighbor’); that ‘the person providing you the information’ is ‘credible’ and not making it up (what on earth would be the point of a thought experiment with that punchline?); and that it doesn’t matter who is responsible or the mechanism by which it is accomplished.

      • TracyW

        But quite often, when making moral decisions, it does matter who is responsible, and the mechanism by which it is accomplished.

        For example, the government is firmly opposed to killing one healthy person to provide organs to save the lives of X other people, but permits, even encourages, vaccines on the basis that they save more lives than they kill. And a lot of people appear to support the government’s position (and most of those who don’t say it’s because they don’t believe the claims about vaccines’ costs and benefits). Perhaps most people are wrong to think so, but it’s hardly an uncommon position.

        Furthermore, we have years of story-telling to alert us that who is offering a deal does matter – it’s pretty darn risky agreeing to a deal with a devil.

      • TWAndrews

        Fair enough, but in that case you’ve postulated a world where some entity has the power to kill (or prevent the killing of) an entire nation on what amounts to a whim (power substantially beyond that of the main antagonists in the fictional works cited), and is willing to use that power to make you choose a person close to you as a sacrifice. I don’t think it’s a stretch to say that our moral intuitions would be radically different in such a world, which was the disputed point.

        • Sean

          If your problem is that you think such a world would be ‘radically different’ above and beyond the conditions explicitly stated in the scenario, all you have to do is make it clear that *everything else is exactly the same*!

          But again, a charitable reader would understand that that is implied.

          • TWAndrews

            The point is that either a) everything else can’t be exactly the same if the scenario is possible–there must be some mechanism by which the specified events will come about–and so we’re missing important information or b) the scenario is arbitrary and doesn’t reveal anything interesting about real-world moral intuition.

          • Sean

            I can’t make heads or tails of what you mean by (b). With respect to (a), suppose I asked you the following:

            Is it morally permissible to torture and kill a random stranger you met on the street for fun?

            My sense of it is that no, it would not be morally permissible. I suspect you agree. I suspect you agree *despite the fact* that I have not specified ‘with a knife’ or ‘with a gun’ or with whatever else one could use to torture and kill a person. Indeed, I can’t think of any sort of torture-and-killing-mechanism that would change our evaluation of that case simply because the mechanism *doesn’t seem to be morally relevant*.

            If that’s true, why do you think our intuitions about the France case critically depend on how it is accomplished? Why would it matter if it were magic or a biological weapon or whatever?

          • TWAndrews

            It matters because we live in a world where it’s possible for one person to kill another, so we’ve got strong moral intuitions about it (you’re correct that I agree it’s not permissible). We don’t, however, live in a world in which an entire country can be killed in such a way that my agreeing that my neighbor should die has a necessary effect on the outcome.

            For instance, we could say that someone will detonate a nuclear weapon that will have the effect of destroying France if I don’t kill my neighbor. In that scenario, there’s no reason, other than the say-so of the person with the weapon, to believe that they won’t destroy France anyway. Should I agree to kill my neighbor just on their say-so? Is it reasonable that they have the power to destroy an entire country, but need me to kill my neighbor?

            The point is that yes, you can eventually specify the situation sufficiently, but when you do you end up somewhere totally ridiculous, i.e. there’s someone who can kill all of France but is willing not to if I kill my neighbor, and I think that such an absurdly contrived scenario obscures more than it reveals about our moral intuitions.

          • Sean

            This is the last I’ll say on the subject because this is going nowhere.

            We *do* live in such a world: as you just conceded, it is both logically and nomologically possible to kill everyone in a country (with nuclear weapons, e.g.). There are plenty of other ways it might be done, and nothing you have said supports your contention that how it is accomplished is ‘important information’, or that these types of thought experiments are all the worse for lacking it.

            Perhaps you want to say that this scenario is just very *unlikely*, and that this unlikelihood matters. But that doesn’t seem true either. Consider a different case:

            Suppose there is a resurgence of Nazism and that these Nazis take power. Like the Nazi regime in Germany, they are attempting to exterminate Jews. You are sheltering a Jewish family in your home and are asked by a Gestapo agent if you know where that family is located.

            I’m sure you’d agree that this scenario is very unlikely. It is highly improbable that a modern, 21st-century liberal democracy would transform into an odious Nazi dictatorship. Perhaps it is not *as* unlikely as the France case, but it is very unlikely nonetheless.

            Surely this fact does not mean the case above can tell us nothing about our moral duties (specifically any duty we have to always tell the truth). But if that’s true of the Nazi case, why isn’t it as true of the France case?

            This stuff about ‘say-so’ has already been addressed. Objecting that the case as described really isn’t the case as described — because, hey, I’m gonna suggest it depends on someone’s say-so and maybe he’s lying! — is no objection at all. It’s rewriting the case. It’s a lack of charity or simple obtuseness.

            Finally, this is strongly reinforcing what I’ve long thought about hostility toward thought experiments. It always seems like an unprincipled way for people to avoid the false implications of their pet theories.

          • TWAndrews

            I’m not saying that it’s implausible that a whole country could be killed. I’m saying it’s totally contrived that there’s a connection between a country being killed and you choosing that it should be your neighbor instead. If you posit that connection in your thought experiment, you need to be able to provide some details on why and how such a connection exists.

            Your example with the Nazis is better–the fact of people sheltering fugitives is one that recurs throughout history, and so it’s less of a thought experiment than just imagining yourself in situations that have already occurred many times. Not coincidentally, I believe that the moral intuition around this situation is largely agreed upon.

            I’m not sure what “pet theories” you’re referring to, but my point is that if you need a totally contrived example to find the false implications of them, you really haven’t found anything of meaning. You’ve just assumed away salient information that people use for moral reasoning in the real world.

          • TracyW

            Well, let’s say that medical research indicates that a vaccine, given to 100,000 people, will save 1,000 people from dying of a disease, at the cost of 1 person dying a painful death from a bad reaction. There is no reason to think that the 1 person who dies from the vaccine was any more likely to die from the disease itself than another person plucked at random.

            Is it morally permissible to give that vaccine?

            I think that most people’s intuitions are yes; indeed, it may be morally mandatory to give the vaccine (herd protection).

            Thus, our intuitions do depend on how something is accomplished.
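
            To make the trade-off fully explicit (a minimal tally, using only the numbers as stated in the hypothetical and counting lives alone), per 100,000 people vaccinated:

            \[ \underbrace{1000}_{\text{deaths averted}} - \underbrace{1}_{\text{death caused}} = 999 \text{ net lives saved} \]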

          • Sean

            You miss my point here, Tracy. Obviously the moral facts — and our intuitions about them — supervene upon non-moral facts like the number of people who will be harmed or saved by our actions, our intentions, and so forth. These are what you have changed around in your scenario.

            My point, however, was that they do not supervene upon *all* the non-moral facts. Some non-moral facts are just not morally relevant, so it does no good to invoke their lack of specificity in thought experiments.

            Thus in my case, it doesn’t matter *how* one might commit torture and murder. I.e., they might use a knife, a gun, a laser beam, or magic. All that matters, it seems to me, is that the act is causing someone serious harm for the purpose of fun. Similarly, I don’t see why it matters that the issue in your case is vaccination as opposed to, say, circumcision, magic, or whatever else. All that matters is that there is a certain expected benefit, a certain low risk of harm, etc.

            Now, we *could* get a different sense from my case if we specified that by committing the torture-murder, we save the universe from obliteration (or something similar). But this would be changing the relevant non-moral details, details that as I said at the start are either explicit or implicit in the original scenario.

  • Matt Pierce

    The difficulty I have with McElroy’s essay is that she says the dilemma posed is a sham because it poses unrealistic alternatives, but then she effectively goes on to answer the dilemma (by arguing the myth of the “greater good” should never be used to violate individual rights). In other words, McElroy shows she clearly recognizes the moral significance of the dilemma posed, and is able to take a view on it. As such, I don’t think the scenarios cited can be viewed as shams.

    Extreme hypotheticals are often useful to help flesh out a more general case of a conflict between moral principles. That is, they can help us to identify what our intuitive responses say about our values, and how we may then apply those values to more specialized cases that occur in the real world. Where these extreme hypotheticals cannot help is the extent to which they imply that we should be able to make an informed moral decision based solely on the extremely abstract and limited information provided in the question. Morality is much more nuanced than simply imposing a generalized hierarchy of values on a specific case.

    It seems what McElroy actually takes for a “sham” is the mere suggestion that utilitarian considerations could be used to justify an infringement on individual rights. Far from rejecting the so-called sham dilemma outright, she has in fact taken it beyond its philosophically useful limits.

  • jtf

    And now we know you read Dragonlance…

  • good_in_theory

    I don’t think “magic” actually refers to the same sorts of differences from our reality in McElroy’s quotes and the content above.

  • Michael Zigismund

    I don’t think I agree with McElroy either (I believe there is merit to outlandish thought experiments), but I do see her point.

    Unlike with fiction, it is unclear what the actions in thought experiments are supposed to represent in the real world. The trolley car might represent foreign aid to some. Based on your prior posts, however, it seems you would disagree that such a representation is realistic. Morality in fiction, on the other hand, is only compelling when we can relate to it.

  • Jameson

    I agree with this critique of McElroy’s essay, and I find that a better critique of justifying State power with thought experiments was posted on this web site just recently. “Morality for adults,” I think it was called. Thought experiments provide us with the opportunity to examine special cases, which inevitably fall outside the usual circumstances to which our moral system is well-adapted. Appealing to something like the incompleteness theorem in logic, I submit that our moral system will always have such shortcomings. But it is indeed a fatal conceit to presume that the State has the kind of omniscience necessary to deal with each and every situation which appears not to be sufficiently dealt with by our common understanding of personal rights.

  • Cory

    I don’t see what the big deal is with McElroy stating that these questions come with very limited background information.

    Even within your setting of Star Wars, the Force is an amoral natural phenomenon used by bad people and good people. Take, for instance, the Jedi Knight Kyle Katarn (extended universe, so not canon), who stated that there is no Light side or Dark side of the Force but only people who use it for good or evil.

    So even in these scenarios, morality is in flux. Is Kyle Katarn evil for using the traditionally Dark side powers of Force Lightning or Force Choke if he does it for the Light side of the Force and the Jedi Order? Or take a canon example – is Qui-Gon Jinn being moral when trying to use a Jedi Mind Trick to force (literally and figuratively) Watto to give him the parts needed to fix Queen Amidala’s broken starship in Episode I?

    I think McElroy has made a legitimate argument about context in these alternative universes, just like the question of whether it is or is not moral for a Jedi to steal from a spaceport junkyard (or try to, anyway).

  • martinbrock

    Vicariously enjoying a moral dilemma in a fantasy doesn’t seem to contradict McElroy’s point. Watching Star Wars might stimulate my moral faculties somehow, but I need not reach conclusions that I expect to apply in real life this way.

    In fact, fictional scenarios often encourage me to think in highly polarized terms, with incredibly despicable, distinct and discernible bad guys battling incredibly virtuous good guys, precisely so that I may enjoy vicarious violence toward the bad guys that would be terribly inappropriate in the morally ambiguous world I actually inhabit.

    Unrealistic ethical dilemmas arguably are less like escapist fiction, enabling me to imagine behavior I would never enact in reality, and more like Zen koans, disrupting my confidence in a systematic ethics by following its implications to absurd conclusions.

    A dilemma with this effect seems useful to me, if not practically applicable, because it leads me to reexamine assumptions I otherwise take for granted.

    On the other hand, a philosopher may also use unrealistic dilemmas to lead me toward ethical assumptions that he does not wish me to examine. A philosopher using my visceral reaction to the Trolley Problem, as we’ve recently discussed, to persuade me that I favor transfers from USAID to the Ethiopian government does not encourage me to reexamine questionable assumptions. He rather encourages me to generalize from questionable assumptions toward even more questionable assumptions without examining anything.

  • Joseph R. Stromberg

    In charity, I think we can say that fictional problems that are tightly drawn and reflect (perhaps) things that could happen are not as much of a nuisance as unlikely ones meant to test our moral intuitions in a sort of vacuum. They might even shed some light by bringing out hidden assumptions. But if so, why not discuss this-worldly problems directly?

    The Trolley, or at least the trolley discussion, crashed in the neighborhood of a claim that no moral theory that didn’t allow for tactical bombing could be “plausible.” (Are we thinking about morality or just making air-power bureaucrats feel better?) Why not debate tactical bombing, etc., directly rather than the trolleytarian stand-in? We would need to know if this tactical bombing takes place near a largely uninhabited battlefield and is meant as close support for actual infantry and tanks, or does it include mistakenly or callously bombing and strafing Warsaw (for example) on the way to finding the Polish cavalry? Or does it, on the present model, involve blowing big holes in villages and heavily populated cities, while damning the wily foreigners for the cheek of having *people* in their country who get in the way of our targets? (See any U.S. Defense Department press conference beginning with the First Gulf War.)

    This drops us in it — Just War Theory and all that — but short of that, how exactly are we to address such issues, much less do so “plausibly”? If we might in the end have to condemn modern warfare rather completely, whose fault is that? Maybe it’s better to know where we stand. If someone “strikes,” we still need a direct object before we can get anywhere. (“Bush-Obama strikes”: Whom? What? Why? Κτλ.)

  • famadeo

    I have to side with McElroy. I don’t see the issue as fiction/reality; I see it as genuine possibility of choice/extreme lifeboat scenarios. If you find yourself in a position where you must decide between one innocent life and an entire nation, the moral value of your choice is practically 0, because, in this case, right and wrong are almost completely contaminated by each other. Compare that to feeding your baby to Godzilla. Give me a break.

  • http://profiles.google.com/stringph Thomas Dent

    Well, actually there are many real or realistic dilemmas that undercut McElroy’s assumptions completely.

    Real dilemmas mostly occur in situations that involve dependents: human beings who for whatever reason depend on others for survival. No matter how perfect the world becomes, there will always be dependents (e.g. newborn babies), and some people must always have moral duties to care for them, in the sense that deliberately not doing so would be equivalent to murder.

    Consider a situation where there are 5 dependents who cannot survive by themselves but need some resource that you can control. You are isolated, and due to an accident the supply of this resource is low enough that if you try to share it out equally, at least 2 dependents will probably die – but if you leave the weakest one to die immediately, the remaining 4 will probably survive.

    Utilitarianism gives you a clear answer (kill the one and save the others) which may or may not be right; is libertarianism any help at all in this unfortunately quite realistic situation? Or is there still not enough information to decide?
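
    To make the utilitarian tally explicit (a minimal reading of the numbers as stated, treating ‘probably’ as certain):

    \[ \text{share equally: } 5 - 2 = 3 \text{ survivors at most}; \qquad \text{triage the weakest: } 5 - 1 = 4 \text{ survivors} \]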

