A lot of very smart non-philosophers are attracted to some form of utilitarianism. Some of these people, like Ilya Somin and Mike Rappaport, are generally sympathetic to the idea of bleeding heart libertarianism, but think that utilitarianism does a better job explaining and defending its attractive qualities than do appeals to “social justice.” Actually, though he sounds less sympathetic, I think this is basically David Friedman’s position too (even if he ultimately rejects utilitarianism as a fully adequate moral theory).

In contrast to these fellow travelers, most philosophers do not believe that utilitarianism is an adequate moral theory. In this post, I want to set out a few reasons why. But first, it’s important to clarify two points that non-philosophers sometimes fail to appreciate about utilitarianism and moral theory.

The first, somewhat pedantic, point is that utilitarianism and consequentialism are not the same. Consequentialism is best understood as a family of moral theories, united in the agreement that consequences alone determine the rightness or wrongness of actions (or rules, or practices, or motives – see here for discussion of the complications). Utilitarianism is a particular type of consequentialism that specifies the kind of consequence that matters – not wealth, not human achievement, but utility. It follows that utilitarianism is necessarily a more controversial theory than consequentialism.

The second point, and the more important one, is that believing that consequences matter for moral assessment is not enough to make you a consequentialist. Any plausible moral theory is going to hold that consequences matter at some level. What distinguishes consequentialism from other moral theories is its claim that consequences are the only thing that matters. This is a much stronger and much less plausible claim. Think it’s intrinsically important that people get what they deserve? Think you have some reason to keep your promises simply because you made them (and not because of the good that you expect to produce by keeping them)? Think that it’s better for guilty people to be punished than innocent people (again not just because the consequences are better)? Then you’re not a consequentialist.

So why do most philosophers reject utilitarianism? Here are a few (quite non-exhaustive) reasons.

1) The Separateness of Persons - Both Rawls and Nozick claim that utilitarianism does not sufficiently respect the fact that persons are separate. But what does this mean? I’ve written more extensively on this topic elsewhere, but the basic idea is that because utilitarianism focuses exclusively on maximizing total utility, it fails to take proper account of the way in which utility is distributed among different persons. Most of you are probably familiar with the worry that utilitarianism sanctions injustice – the slavery of the few, for example, so long as it benefits the many. Worries about the separateness of persons are related to this, but more fundamental. The reason utilitarianism allows us (or mandates us!) to commit injustice against the few is that it is a fundamentally collectivistic morality. In focusing exclusively on aggregate happiness, it fails to show proper respect to individuals. As Nozick put it:

To use a person [for another's benefit] does not sufficiently respect and take account of the fact that he is a separate person, that his is the only life he has. He does not get some overbalancing good from his sacrifice. (Nozick, ASU, p. 33)

It’s important to realize that the worry here is not simply that utilitarianism sanctions injustice. In response to that objection, utilitarians can respond with a whole host of pragmatic reasons to suggest that injustice won’t really maximize utility in the long run. These responses may or may not ultimately work. But they miss the deeper problem. The deeper problem is that even if utilitarianism gets the right answer about how we should treat one another, it gets that answer for the wrong reason. Surely, the reason it’s wrong for me to kill you is that to do so would be to violate an obligation I have to you. It’s not that the world as a whole will be a somewhat happier place with you in it than without.

2) Utility doesn’t always matter, and isn’t the only thing that matters - Utilitarianism says that utility, or happiness, is intrinsically valuable. It says, in fact, that utility is the only thing that is intrinsically valuable. But what does this mean? And how plausible is it? Does utility mean pleasure? Preference-satisfaction? Happiness in some broader, eudaimonistic sense? The more we start to think carefully about how many different things we might mean by “utility,” the less obvious it seems that any one of those things could really be the only thing that has intrinsic value.

And why should we believe that utility has intrinsic value in the exclusive and universal way that utilitarianism suggests? If a child molester derives pleasure from fondling young kids, why on earth should we think that that pleasure has any moral value at all? Why should we think that whether child molesting is wrong or not depends on the empirical question of whether the child’s suffering outweighs the molester’s happiness? Another, related, point: while most of us would agree with utilitarians that the pain and pleasure of animals has some moral relevance, why should we think that it has the same kind of moral relevance as pain and pleasure in human beings? Is a unit of pleasure really always just a unit of pleasure, and that’s all morality has to say?

3) Egalitarianism without foundations - Related to this last point, utilitarianism counsels us to be absolutely impartial in the way we measure the utility effects of our actions. Your own happiness counts for no more and no less than the happiness of any other person. But again, why should we believe this? Don’t we owe something more to our own selves than we do to a complete stranger? Aren’t we entitled to spend the $5 we worked for and earned on a cup of coffee for ourselves, even if that $5 could generate a larger sum of utility if it were spent in some other way? Don’t we owe more to our children than we do to total strangers? Utilitarianism’s egalitarian premise – that all utility everywhere has the same moral value, no matter whose utility it is and no matter what relationship (or non-relationship) you stand in to that person – is not only implausible. It is an almost entirely unargued-for assumption. And it is one that we should reject.


Notice that none of these objections is avoided by moving from act-utilitarianism to rule-utilitarianism (or rule-consequentialism). They are deeper moral problems stemming from the underlying structure of consequentialist or utilitarian theories in general. If it works (and that’s controversial), rule consequentialism merely helps the consequentialist avoid some of the more troublingly counterintuitive practical implications of her theory. It does nothing to address the underlying theoretical defects.

Notice also that we haven’t even broached the topic of whether utilitarianism provides a good foundation for libertarian politics. And there are, as Kevin Vallier has suggested in several posts on this blog, good reasons to think that it does not. The problems we have discussed here are simply problems with utilitarianism as an adequate moral theory. Utilitarianism’s problems as an adequate libertarian moral theory seem (especially in virtue of points 1 and 2 above) likely to be even more severe.

  • http://www.facebook.com/profile.php?id=675828429 Dale Dorsey

    “Think it’s intrinsically important that people get what they deserve? Think you have some reason to keep your promises simply because you made them (and not because of the good that you expect to produce by keeping them)? Think that it’s better for guilty people to be punished than innocent people (again not just because the consequences are better)? Then you’re not a consequentialist.”

    All of these claims are compatible with consequentialism.  To say that something is “intrinsically important” can be treated as tantamount to saying that it is “intrinsically good”.  And so consequentialists can and do believe it is intrinsically better for the guilty to be punished than the innocent, that you have an intrinsic reason to keep your promises (because keeping one’s promises is intrinsically good), and that it is intrinsically important (i.e., intrinsically good) that people get what they deserve.

    • http://www.sandiego.edu/~mzwolinski Matt Zwolinski

      Fair enough. I was a bit sloppy in the post in speaking in a way that makes it natural to translate what I said into states-of-affairs talk, and thus render it compatible with consequentialism.
      But your suggested restatement below (“Think one has decisive reason to give people what they deserve, despite the fact that doing so would produce worse outcomes overall?”) seems too strong to me. It’s not the decisiveness of one’s reason for action that makes a theory non-consequentialist, is it? It’s more like the agent-relativity. A non-consequentialist will say that you have special reason to keep the promises that you make; it’s not just that you have reason to act in a way that results in more promises being kept. That reason needn’t be absolutely decisive; one could have a kind of threshold deontological position, for instance. But it is necessarily agent-relative.

      • http://www.facebook.com/profile.php?id=675828429 Dale Dorsey

        Also not sure about that.  It depends on whether one thinks that the good is necessarily agent-neutral or agent-relative.  Most of the time people think that consequentialists commit to the promotion of agent-neutral value.  But that’s not a defining feature of the position.  You’re right that the restatement is too strong.  Try: “Think one has reason to give people what they deserve, irrespective of whether doing so is either instrumentally or intrinsically good?”  How’s about that?

        • http://www.sandiego.edu/~mzwolinski Matt Zwolinski

          Right. But once we move away from defining consequentialism in terms of a commitment to agent-neutral moral value, then I don’t know how to define it in a way that makes clear what the distinction between consequentialist and non-consequentialist theories is. Suggestions?

          • http://www.facebook.com/astrekal Alex Strekal

            Instrumentalism vs. intrinsic value seems to be pretty vital. In my mind, consequentialism goes hand in hand with instrumentalism – and as such, is incompatible with any notion of intrinsic value. To the extent that there are consequentialist theories that try to base themselves on intrinsic value, I’d just say that they contradict themselves (and therefore, in conjunction with what you said about utilitarianism above, utilitarianism contradicts itself).

          • Dale Miller

            No, this can’t be right. After all, the most straightforward form of consequentialism says that right actions are those that best promote the good, where the good just means the totality of those things that have intrinsic value. It might be closer to the truth to say that consequentialists believe that the only appropriate response to those things that have intrinsic value is to bring as much of them into existence as possible, as opposed to honoring or respecting them in some other way. (Pettit says something roughly along these lines.) 

          • http://www.facebook.com/astrekal Alex Strekal

            I don’t follow. In my mind, if we’re really consequentialists, we’re inherently making our values contingent – it opens the possibility that our values don’t apply in some contexts, due to the consequences. Which means they aren’t intrinsic. They’re valuable either because of their consequences or to some extent *not valuable* because of their consequences.  

            Consequentialism makes values potentially subject to limitations. You can’t so limit something that’s intrinsically valuable.

          • Dale Miller

            I think that the confusion here is the notion that there is something “deontological” about the proposition that something has value for its own sake, i.e., intrinsically. And this may come from conflating “intrinsic value” with “intrinsic rightness.” For the consequentialist, it is (roughly) right to say that only consequences matter where rightness is concerned, but it is only relative to a theory of the good—an account of what has intrinsic value—that we can judge how good different sets of consequences are.

          • http://www.facebook.com/astrekal Alex Strekal

            Ok, but I don’t see how that answers the main problem I’m trying to get at – that, whether you want to call it “deontological” or not, the notion of intrinsic values makes consequences blatantly not be the only thing that matters – consequences are considered relative to what is actually fundamentally mattering – the values themselves. We judge how good the consequences are by our values or “theory of the good” - but then everyone does that.

            What is consequentialism then? What makes it distinct from a method that any of us could apply without being committed to a particular normative ethical theory? We all have values that constitute what we consider the good. And we all judge phenomena and consequences by those values. Consequences being all that matters doesn’t come into it; rather, the relevance of consequences is implicit in most moral discourse itself.

          • purple_platypus

             The consequences that matter (for the utilitarian) just ARE the intrinsic values of states of affairs. At least according to the utilitarian, your argument rests on a distinction without a difference.

          • http://www.facebook.com/astrekal Alex Strekal

            What is “the intrinsic value of states of affairs”? At the moment, this seems a pretty nebulous basis for consequentialism.

          • Richard Chappell

            One proposal: Consequentialists think that you ought to choose to bring about the (antecedently) most desirable outcome, whereas non-consequentialists think that there are constraints that may prevent you from (permissibly) bringing about even an outcome that you ought (“pre-morally”) to want.

            (For further detail, see section 5 of my ‘Fittingness’ paper [pdf], forthcoming in Phil Quarterly.)

  • John

    Matt, 

    I think you are right in your criticisms of utilitarianism, but I also think you may have left off the most damning problem, namely that utilitarianism requires that we care about agent-neutral value. Most economists, I suspect, who think they are utilitarians are really contractarians who accept that consequences matter from the point of view of individual choosers, not from the agent-neutral, total or average utility point of view. What they care about is mutual advantage, not utility. As a contractarian, one can be a consequentialist and think that consequences matter, but also be an individualist and subjectivist about value. Ultimately, this view will involve a complicated form of deontology, but the constraints are justified teleologically–at least on Gauthier’s view. Jan Narveson is the perfect person to weigh in on this as he was a utilitarian originally but was convinced by Gauthier that contractarianism was more sensible.  

  • http://www.facebook.com/profile.php?id=675828429 Dale Dorsey

    What you would need is something more like: “Think one has decisive reason to give people what they deserve, despite the fact that doing so would produce worse outcomes overall?”  Something like that.

    • Bongstar420

      I know this is a long time after the fact for this commentary, but since when is there a way to objectively delineate deserving anything across a probable infinite system of variable conditions in accurate absolute terms?

      If I deserve a particular outcome, it is because I was able to make an “agreement” with another sentient being. Otherwise, the only coherent definition I could find for “deserving” a particular outcome is to refer to what actually happened as what was “deserved”.

  • http://www.facebook.com/astrekal Alex Strekal

    I’d consider myself a consequentialist, but not a utilitarian. Yet, according to what you’ve laid out, I can’t be a consequentialist – because I find the notion of consequences being “the only thing that matters” to be incoherent. At the same time, according to the alternative you provide (things being intrinsically good or bad), I don’t live up to that moral position either. I both deny that anything is intrinsically good or bad and that “consequences alone matter”.

    A qualm with how you’re defining things, Matt:

    “Consequentialism is best understood as a family of moral theories, united in the agreement that consequences alone determine the rightness or wrongness of actions”

    And:

    “What distinguishes consequentialism from other moral theories is its claim that consequences are the only thing that matters.”

    Is consequentialism really a family of views that consider consequences *alone* to matter? And what coherence is there to the notion of “consequences alone”, if consequences inherently have to be measured relative to particular values or desires? It would seem to be the case that for any given person, consequences can only matter contingently. What’s at the heart of the contingency? The values themselves – the values matter, and the consequences matter only in relation to the values (therefore, “consequences alone” just isn’t on anyone’s radar).

    Consequentialism then inherently becomes a pluralistic landscape of moral views – depending on what standards we judge consequences by. The notion of consequences in a vacuum is obviously not useful, as it provides no specific information. So I’d say in some sense consequentialism never can really be “just about consequences”; it’s more of a method than a standard in and of itself.

    • Bongstar420

      How could people be consequentialists and supernaturalists (what most people claim) at the same time? It seems to me that supernaturalism requires/results (in) non-consequentialism to stay coherent.

      I don’t believe people are actually supernaturalists as they claim, and are therefore consequentialists and ultimately utilitarians. They just have hallucinations, and think it’s ESP or something. The term utilitarian, as officially constructed, doesn’t appear to be accurate though. The usefulness of x is a self-defined state and varies. There is utilitarianism outside of the greatest good for the greatest number, but I seriously doubt it exists outside of sentience. It just so happens that the fuzzy notion of the greatest “good” for the greatest “number” is most probable over the long run, whatever that might mean (likely to be expansive diversity, because that gives sentience the most chances at persistence). And, no. Arbitrary actions are things that only happen from limited perspectives, or that is to say it is more of an idiom than an accurate description of reality. If it were possible to have a “god’s” eye view, I seriously doubt arbitrary would appear accurate. If you ask me, it is all about ulterior motives and lack of consciousness generating the foundation for the notion of arbitrariness. The same thing goes for the appearance of non-consequential acts such as in ideologies. They may appear to be ignoring the importance of consequences from your perspective, but they in fact think the thoughts they do because they are serving a purpose – just not the purpose they think or say (depending on whether ulterior motives actually exist, which is very common).

      Morals are ill-defined. They should be considered social relationships that are observed. Dogs have morals and so do bees. We add conscious thought and self-reflection beyond what a dog or bee could do.

  • http://www.facebook.com/people/Kevin-Currie-Knight/100000158541035 Kevin Currie-Knight

    And there are several additional problems that, to my mind, no utilitarian has ever solved satisfactorily (though that may be because I haven’t read the right stuff).

    One is the problem of interpersonal comparisons between types of utility (never mind how vague a word that is). Not only is there literally no such thing as a ‘util’ (a unit of utility, such that we can compare utility among people), but different things may affect that “utility” in such different ways that they can’t be objectively compared. (Mill tried to distinguish higher from lower pleasures and argue that higher ones are just better, but that, and similar, arguments always seemed very arbitrary to me.)

    The second objection I have to utilitarianism (which sets itself up generally as a moral realist theory) is that the word “utility” is vague enough that it can’t per se carry utilitarianism above the spectre of either relativism or subjectivism. What types of “utility” are important, and which are more important than others? What is the time frame within which the maximization of utility would have to occur in order for action x (that leads to such maximization) to be justified? Unless there is an objective and neutral way to answer these questions, utilitarianism doesn’t solve many of the hard moral cases. (Of course, other theories may not be able to resolve some of those questions either, but if utilitarianism aims at being a more objective or scientific moral theory, then this is a strike against it.)

    • Bongstar420

      Refer to comment above.

      There can’t be a unit of utility. I don’t see why there should be. I think your problem arises from the fact that the authors are likely attempting to effect change rather than describe reality. The notion of utility cannot be described in hard terms (assuming that fuzzy is unsatisfactory to you – this leads me to predict that you vote right wing, either lib, indep, repub, or con; more likely lib/indep?). As far as relativism or subjectivism goes, I do not see how relativism is an inaccurate descriptor of reality, and subjectivism is accurate to describe individual sentient beings but is not accurate to describe the sum of all sets (reality).

      I’d say I am a utilitarian, and I’d also say that other learned people (like the commentators on this forum) would likely perceive that. I’ve yet to find a moral I couldn’t solve in a way that was satisfactory for me (I’m satisfied with some fuzzy definitions because sometimes fuzzy is as good a resolution as you can get in objective terms from our perspective).

      As far as a more objective or scientific moral theory goes, any theory that is naturalistic fulfills this purpose against anything that is not. How many utilitarians are not naturalists? I say a moral cannot be reasonably justified without evidence. The value of those morals is to be determined by those who consider them or otherwise live by them (no one needs to be aware of a moral, as these are social relationships and not rules instituted by authorities).

  • berserkrl

    Another problem with consequentialism is this: a) in order to get the best results in the long run, we need to commit ourselves to general policies rather than deciding everything on a case-by-case basis; but b) if we regard these policies as mere rules of thumb we end up sliding back toward the case-by-case approach, while c) if we regard them as more than rules of thumb, we’ve abandoned consequentialism.

    Or, to put it another way, although consequentialists, fairly enough, stress that consequentialism is a theory about what makes actions right, not a decision procedure, the decision procedure recommended by the more plausible forms of consequentialism tends to undermine one’s commitment to the theory.

    More here.

    • http://independent.academia.edu/DannyFrederick Danny Frederick

      Hi Roderick,

      My response to Matt is also a partial response to what you say here. But I’ll add a few bits.

      First, why should my adherence to a moral code be undermined by my discovery that general adherence to that moral code contributes more to human welfare/flourishing than do other types of behaviour? Further, given that our cognitive limitations rule out act-consequentialism, it seems consequentialists need a rule-based morality. Surely, their adherence to a particular moral code would be fortified rather than undermined by their discovering that the code is better than others in contributing to human welfare/flourishing.

      Second, there is a difference between a rule of thumb and a ceteris-paribus rule. The former gives only an approximately right (or good enough) answer for a commonly encountered range of circumstances, and a wrong answer in other circumstances. The latter gives a dead right answer for normal circumstances, needing correction only in exceptional circumstances. Correct moral rules are ceteris-paribus rather than rules of thumb. So there need be no tendency to slip into act-consequentialism in ordinary circumstances. Given the impossibility of act-consequentialism for creatures like us, that is just as well.

      Third, the fact that there are always exceptions to moral rules, and that when they occur we have to engage in deliberation, including taking account of consequences, is not only a familiar fact, it has its parallel in the sciences. Waismann argued that our empirical concepts have ‘open texture’, and this point was carried farther by Kuhn and Feyerabend, who pointed out that flexibility and openness in our empirical concepts are required if they are not to be too brittle for use in real life. And Popper argued that all laws of nature are ceteris-paribus and that novel exceptions are guaranteed to arise, if not by evolutionary emergence, then by human free will. In consequence, a moral code consisting of a system of rules is never sufficient for deciding how to act. In unusual circumstances there is always a need for deliberation.

      • Bongstar420

        In my work, the only thing that exists is rules of thumb. Furthermore, rules of thumb do not work without case-by-case consideration. It appears to me that the best a “god” could muster would be a rule of thumb. Reality is such that absolute knowledge (in addition to other omnipotent attributes) is not possible for an observer (a sentient being that thinks or otherwise processes information). I don’t know if any of you guys know this, but your comfort level with fuzzy or absolute conclusions is genetically determined.

        Look, as far as I am concerned, I don’t care what language you use to describe it… In no way is it possible to have an uncaused cause. One cannot consider any notion independent of cause and effect – that goes for social relationships (morals). Hence non-consequentialism is a misnomer (the only way cause and effect doesn’t matter is to the extent that it is not occurring in the ultimate sense).

        Referring to what past philosophers have said or think seems fairly irrelevant to me. I came to the concept of utilitarianism without reading philosophy. In fact, I am hardly familiar with any authority figure in philosophy and am generally not concerned with what any particular authority has established. References to who said what are pretty irrelevant to our purposes unless you are trying to benefit from the careers of philosophy (like that is your job or you wish to generate the perception that you are an authoritative figure in philosophy for influence). This is annoying to me. For example, I was forced to read some BS from Descartes. It doesn’t matter who wrote those ideas and when unless I am going to infer relationships outside of that fact. The only understanding this provided me was that power was more of an important aspect to history than objectivity by a huge margin (I mean literally magnitudes). I seriously doubt this is what the people who instituted the practice intended – for me to come to the conclusion that might was more important than right, unless that is how you derived your might, which is usually not how things happen.

        I can put it like this. If there is no consequence, then there is no reason to care at all. If it serves no purpose, then there is no reason to care at all. So it follows, if you wish to impose a rule that is of no consequence and no purpose, then I do not care at all. Go head. Take the liberty. I will support it because it will be of absolutely no consequence or purpose and therefore will not interfere with my goals or anyone else’s.

        My position is this. There is reality. We are trying to discover it. The only valid answers are accurate ones. I think philosophers are severely lacking in the physical sciences. One cannot derive a proper morality without a proper understanding of the nature of reality, which only exists in physical terms (people have misdefined physical, btw, though I do not believe real scientists have).

  • Dale Miller

    It is probably true that most philosophers reject utilitarianism. Then again, it is probably also true that most philosophers reject Kantian ethics, Aristotelian virtue ethics, contractarianism, etc. That only means that there is no particular moral theory that most philosophers endorse. Does utilitarianism enjoy less support among philosophers than any other moral theory? If so, what is that theory (and what’s the evidence for thinking that it enjoys more support than utilitarianism)? So far as I’m aware, there is no journal dedicated to work on a specific moral theory other than Utilitas—which is, of course, edited by a philosopher.

    • Dale Miller

      I had missed the link to the survey in the OP, so I’m relieved that it bears my claims out. :-)

    • Bongstar420

      I would imagine that anyone who rejects utilitarianism is letting the word be too narrowly defined, since rejecting the notion serves a purpose in itself which is utilitarian (it serves that individual’s purpose; otherwise they wouldn’t think it).

  • OutOfTheBox

    I’ll argue that deontology and utilitarianism both have specific-case uses and when used properly they create the BHL ideal.

    Claim 1:  Self-interest and altruism are sub-economic forces that exist prior in concept to social contract, and therefore the problems they cause and the potential they offer necessarily constrain the debate over the ideal social contract. (I’m a rights-come-from-social-contract kind of BHLer.)

    Claim 2:  Both forces if left uncapped can destroy a society and therefore must be stopped.  To be specific, self-interest causes individuals to go too far and altruism (when voted into government) allows government to go too far (or more specifically, it provides cover for greed and foolishness to go too far in the name of good intentions).

    Claim 3A:  To stop individuals from going out of control, you need to use deontology — government must establish rights that protect individuals regardless of consequences.

    Claim 3B: To stop government from going out of control you need to cap spending, flatten taxes and eliminate unfunded mandates.  This forces all demands on government into spending, and spending can be prioritized.  By prioritizing spending, you turn government spending into a scalar utilitarian marketplace where ideas compete over limited funds and the highest possible priority grade — which forces results to matter.  This is the precise mirror image of capitalism — a constantly improving economy that pursues social justice, powered by our collective good intentions.

    Big BHL Conclusion:  Government spending can be transformed into the mirror image of capitalism, and this changes everything.

    • Bongstar420

      Altruism is a misnomer. It does not exist. The reason for this is that any actual altruist that could exist would not survive or propagate very well, as they give their energy to non-altruists. I have yet to see a non-selfish act. Mother Teresa (a historical figure thought of as an altruist) could not possibly have been an altruist. She was doing what she wanted to do. How is that a selfless act?

  • http://www.facebook.com/profile.php?id=723063759 Julian Bennett

    Nice neat article.

    It seems that since your opening paragraph is scrutinising the smart non-philosophers, it would be nice to finish the scrutiny off with some kind of assessment of where they go awry.

    So, a lot of smart non-philosophers are attracted to utilitarianism whilst most philosophers are not. However, many of your criticisms apply to consequentialism, and most moral philosophers (as per the empirical study on what philosophers think) are attracted to consequentialism. So presumably where the smart non-philosophers go awry is with what distinguishes utilitarianism from consequentialism, namely the theory of value. The smart non-philosophers either have too constrained a theory of value – they think that happiness is the only thing of intrinsic value – or perhaps they treat the concept “happiness” in a vague way, to cover any kind of state that has intrinsic value.

    Regarding egalitarianism, this is part of a theory of value rather than consequentialism. I find that a lot of smart non-philosophers are attracted to the idea that egalitarianism is both true and false depending on the context – you ask them if they think black and white people, young and old, and people of different nationalities should be equal before the law and have the same rights, and they tend to agree. You ask them if they and their family are more valuable than other people, say strangers, and they also agree.

    However, if everyone thinks that they are more valuable than others, then we have a situation in which everyone is more valuable than everyone else. This looks like something has gone wrong, like each person thinking that they are a better-than-average driver or teacher (people do think this).  This is where we need a closer analysis, to find out what has gone wrong or whether people really mean what they say.

    With the issue of the child molester and happiness we are again dealing with the consequences of a theory of value rather than with consequentialism per se.

    It is intuitive to think that we should be able to keep what we earn, but then free will is intuitive too. Yet determinism erodes the idea of desert, and there is widespread support for this view from the sciences. It is also the most widely held view by philosophers who deal with this area.  So whilst I have the folk intuition that people deserve to keep what they earn, I don’t find this intuition reliable when I consider the wider picture.

    This brings me to my main point of criticism. Most of your dismissal of consequentialism (it is really a theory of value that many consequentialists hold that you are highlighting as problematic) consists in using folk intuitions to point out how counter-intuitive it is to attribute such things as value to child molesters, or equal value to strangers as to oneself, or to not attribute what we earn to ourselves on the basis of desert, etc.  This is an appeal to folk intuitions, and it looks like an unsound strategy.

    The reason why it is an unsound strategy is best highlighted by the implicit premise of this article – namely, that smart non-philosophers go awry with moral theories when dealing with a theory of value (because they think utilitarianism is plausible whilst most philosophers do not).  This tells us that folk intuitions are not reliable over the domain that deals with the theory of value, and yet your main criticisms appeal to these unreliable folk intuitions over what is valuable (in a context to get the answers you want).

    So, what is really needed, in my view, is not a direct appeal to folk intuitions to determine what is valuable, but a more in-depth discussion of a theory of value, and this in turn needs to be related to what we know from other domains.

    • Bongstar420

      Why does the opinion of a non-observational philosopher (a philosopher with no physical-science experience) matter? Why does an action without consequence matter?

      The problem of everyone being more valuable than everyone else is not a problem for explaining value per se. That notion exists because some of us highly skilled folk refuse to associate with much less skilled folk. That makes the low-skilled folks’ value to each other skyrocket, since those people will actually be loyal and relatable to each other. It is not valuable to low-skilled people to associate with others who cannot find value in them.

      As for child molesters: they are losers and they know it. They do not have futures, and they behave accordingly. The value they pursue is short-term (centuries is short-term, btw). Child molesters do not intend to derive any value from their victims outside of sexual gratification. They know what they do is damaging and do it anyway, though it isn’t much more damaging than a lot of normalized behaviors. It’s the same way a person might shoot heroin into their veins until their veins collapse. Addiction exists because of the limited consciousness people have; the user does not concern themselves with anything but immediate gratification. We pursue life out of addiction (we are addicted to life), not reason. Reason is used to justify or falsify addiction.

      There is no science without determinism; that’s why you find support for determinism from scientists. They are right, but they do have to presuppose naturalism, and therefore determinism, to be able to study things in the first place.

      Again, I do not see why a philosopher with no training in the physical sciences has any relevance to actual life. They do not make observations, but analyze what they think “proper” logical relationships are (FYI, reality defines that, not us).

      There cannot be an absolute theory of value unless it contains fuzzy definitions and outcomes (which would nullify your acceptance of its absolute nature), since that is what reality is from our perspective. The physical sciences are replete with fuzzy observations as an intrinsic aspect of the nature of actual reality.

  • Richard Chappell

    Hi Matt, a few thoughts:

    (1) I think it’s confused to call utilitarianism a “collectivistic” moral theory, for reasons explained here.

    (2) It’s not true that utilitarianism (properly understood) neglects the separateness of persons, unless you’re using this phrase stipulatively to mean nothing more than that it doesn’t countenance deontological constraints.

    (3) A lot of the problems you identify under your second point are avoided when we understand “utility” to mean (as most contemporary utilitarians do) *well-being*.  If we accept an objective conception of well-being, we may think that engaging in vicious behaviour is not actually good for a person (however much pleasure they might get from it), and that there are indeed important differences in kind between the welfare of persons and non-persons.

    For anyone interested, a more general counterpoint to Matt’s perspective may be found at:
    http://www.philosophyetc.net/2011/11/why-consequentialism.html 

    • Bongstar420

      (2). Does statistics neglect the separateness of individuals?

      It requires it.

      (3). That person who derives well-being from violence is valuable in some environments. That is why they exist at all. They are no longer valuable and will diminish in numbers over time as a consequence. Violence has become a big liability and will continue that trend for some time.

  • http://www.facebook.com/hidalgoj Javier Samuel Hidalgo

    In my view, a persuasive objection to agent-neutral consequentialism (not the newfangled agent-relative teleology) is brilliantly articulated by Francis Kamm in an interview with Alex Voorhoeve. I’ll just quote it in full:

    I tend to think that some of the philosophers who think that we have very large positive duties, but don’t live up to them, are not really serious. You can’t seriously believe that you have a duty to give almost all your money away to help others in need, or even a duty to kill yourself to save two people, as one of my former colleagues believes, and then, when we ask why you don’t live up to that, say, ‘Well, I’m weak. I’m weak.’ Because if you found yourself killing someone on the street to save $1000, you wouldn’t just say, ‘Well, I’m weak!’ You would realise you’d done something terribly wrong. You would go to great lengths not to become a person who would do that. That’s a serious sign that you believe you have a moral obligation not to kill someone. But when somebody says, ‘Our theory implies that you should be giving $1000 to save someone’s life and if you don’t do it, that it’s just as bad as killing someone,’ and he says, ‘I don’t give the $1000 because I’m weak!’, then I can’t believe he thinks that he really does have that obligation to aid or that his not aiding is equivalent to killing. Imagine him coming up to me and saying, ‘I just killed someone  for $1000, but I’m weak!’ Gimme a break! This is ridiculous. There must be something wrong with that theory, or else there is something wrong with its proponents.

    There is a deeper truth here. The deeper truth is that deontology becomes more plausible when you think of things from the first-personal perspective. You need to ask yourself: “What would I actually judge to be the right course of action if I were in such-and-such a situation?” Once you genuinely think this question through, commonsense deontology becomes much more compelling.

    • Richard Chappell

      Though isn’t that just to report a psychological fact about us: that the consequentialist perspective is more difficult for us to internalize?  Unless one has antecedent reason to think that human psychologies are made for morality, this doesn’t seem to speak to the truth of the theory.

      (It’s also worth noting that there are dimensions along which even consequentialists can allow that our failure to save lives by charity differs from murder.)

      • http://www.facebook.com/hidalgoj Javier Samuel Hidalgo

        Well, I’m just skeptical that I have sufficient reason to reject moral intuitions about killing versus letting die and other components of standard deontology. Maybe some argument for consequentialism will come along that will show these intuitions to be totally untenable (obviously, I don’t think that argument has arrived just yet). Until then, I think these intuitions are more firmly grounded than the challenges to them.

        • Richard Chappell

          Fair enough, though I actually find it striking just how poorly deontological attempts at systematization (e.g. doing/allowing) match up to common-sense intuitions — assuming you agree with the standard intuitions that it’s right to turn the switch in the simple trolley case (thereby killing one), and wrong to let a child drown in Singer’s pond.

          • http://www.facebook.com/hidalgoj Javier Samuel Hidalgo

            You’re right: there are many problems with explaining or justifying the intuitions that motivate standard deontology. But it seems to me that the problems that confront consequentialism are much worse. So when consequentialism tells me to jettison my first-order deontological judgments about cases, I’m inclined to retain those judgments and reject consequentialism. As an aside, I like Jeff McMahan’s take on these issues.

          • Bongstar420

            Hey, I’d rather hire you than a consequentialist as a source of profit! At least I’d know what I would be getting, and it would be safe to assume that your behavior would be much more predictable and therefore controllable.

          • Bongstar420

            Why would a Human’s common sense intuition be “better” than that of a Chimpanzee?

        • Bongstar420

          Because actions without consequences are meaningful in what way?

    • http://www.sandiego.edu/~mzwolinski Matt Zwolinski

       Nice. Hadn’t seen that.

      You catch this related piece from Caplan?

      http://econlog.econlib.org/archives/2012/04/the_argument_fr.html

    • Bongstar420

      The idea that charity shouldn’t even be necessary or exist at all serves a much greater purpose. I posit that charity doesn’t exist anyway, since altruism is a misnomer. Some people “need” other people to “need” them. http://en.wikipedia.org/wiki/Narcissistic_personality_disorder

  • Damien S.

    ” most philosophers do not believe that utilitarianism is an adequate moral theory”

    The linked survey does not mention utilitarianism, so what is this statement actually based on?  The survey does have 25% accepting or leaning toward consequentialism, and a bit more to deontology, but the plurality are ‘Other’.  It also has 90% of them rejecting libertarianism, in favor of Other, egalitarianism, and communitarianism, so libertarians might not care what most philosophers think…

    • Bongstar420

      I don’t. But that is beside the point. I’d not put much stock in surveys like that. People are way too prone to ulterior motives to be trusted as always-dependable reporters. Shit, people lie 30% of the time in public and 10% of the time with lovers (in my life I have actually lied maybe a few dozen times – that’s apparently not common with people).

      But, ya, I agree… People are unlikely to be libertarians, egalitarians, or communitarians because those systems do not serve high utility to them (in statistical terms). Most people will not gain from libertarianism, truly valuable people will not be valuable in an absolute egalitarianism, and communism will remove the rewards of being more valuable while still expecting the contributions – unless titles are good enough, in which case it could work (sex would likely be the reward that differentiates low and high value). I could live in all three systems as long as cheaters were eliminated, which means it won’t happen.

  • Ilya Somin

    I can’t speak for other “fellow travelers.” But my claim was not that utilitarianism is a complete and adequate moral theory, but that utilitarianism effectively captures what is (potentially) useful about the concept of “social justice.” As I have said on various occasions, I don’t believe that utilitarianism is the only basis of morality. It must be combined with other considerations.

    • Bongstar420

      Isn’t saying that utilitarianism effectively captures what is useful self-referential? Morality is a thing that depends on various aspects. Its application ought to vary, and an absolute probably couldn’t be determined outside of an “absolute rule of thumb,” which doesn’t seem different from a normal rule of thumb. I’d say the best-case scenario is a case-by-case basis. Apparently some people can’t live with that being an adequate explanation of the phenomena. But, hey… I really am just some dumbass, and apparently people like me carry low stock amongst career philosophers and the like. I didn’t know a lot of the words on this forum until the day I read them (“consequentialism,” for example).

  • http://millsrevenge.tumblr.com/ Mill’s Revenge

    I think non-philosophers are attracted to utilitarianism because they don’t attach to it the same baggage that philosophers do. 

    Utilitarianism is a great, vague first principle. Whether you’re an academic philosopher or just some random person, you have to start by asking what the goal of your ethical system should be.

    Is it despotism? Certainly not.

    Is it “equality”? Well, no. A utilitarian would reject that. It’s a slightly less abstract argument here – “equality” is derived from keeping people down, as Rush (not Limbaugh) put it in “The Trees” (“And the trees are all kept equal by hatchet, ax and saw”).

    It’s very simple. The greatest good for the greatest number of people.

    “But that leads to a justification of slavery! Or Communism! Or agent-neutral consequentialism or another term that only modern philosophy grad students would know!” Wrong, wrong and wrong (I think). It leads to a lot of questions about how to create the greatest good for the greatest number of people. 

    And I can’t think of a better place to start.

    (Naturally, since I started posting, Ilya Somin put it more succinctly than I did.)

    But utilitarianism might make a comeback. I have three Twitter followers!

    • Bongstar420

      Ironically, Rush is not a valuable person unless you think blaming the victim is an adequate way to teach society a lesson about the value of its policies. How else could Rush contribute to a persistent diversity of sentience over the eons? If everyone fell into this guy’s MO, our odds of proper advancement (correct understanding and application of the facts) would lower. Shit, if we don’t consider the lesson of the bully, this man serves no purpose to the sum of our timeline and possibly other beings’ timelines (possible future relationships with extraterrestrials).

  • Mike Valdman

    There’s lots to say on this topic, but let me just respond to the charge that utilitarians assume, almost entirely without argument, that all utility everywhere has the same moral value.  One argument for this “assumption” is that, when it comes to utility, most of us already accept a form of temporal impartiality — that, all else being equal, one has no reason to prefer, say, a pleasure now to a pleasure of equal intensity, say, a week from now.  But then a puzzle arises: why insist on such impartiality across time but not across lives?  Deepening the puzzle is the worry that, in responding to someone who rejects temporal impartiality, one is tempted to appeal to the thought that, all else being equal, pleasure now would be no more or less pleasurable than the same pleasure would be a week from now.  But then, by the same token, and all else being equal, it would be no more or less pleasurable were it to reside in me than if it were to reside in you.            

    • Bongstar420

      All else is not equal. This statement is derived from observation.

  • http://independent.academia.edu/DannyFrederick Danny Frederick

    Hi Matt,

    I do not see that you have raised any real problems for rule consequentialism.

    You say: “Think it’s intrinsically important that people get what they deserve?
    Think you have some reason to keep your promises simply because you made
    them (and not because of the good that you expect to produce by keeping
    them)? Think that it’s better for guilty people to be punished than
    innocent people (again not just because the consequences are better)? Then you’re not a consequentialist”

    I disagree. Your three questions can be re-formulated as rules of a rule-consequentialist theory.

    You raise three objections to utilitarianism and then you say that “none of these objections is avoided by moving from act-utilitarianism to rule-utilitarianism (or rule-consequentialism).”

    I disagree. The first objection, from the separateness of persons, can be avoided by the rule-consequentialist if he adopts a rule that people are self-owners, or something similar. And this does not mean that he gets the right answer for the wrong reason.  For moral rules assign moral obligations, so the reason it is wrong to break the rule is that it violates an obligation. The rule-consequentialist need not maintain that the reason it is wrong to break the rule is that the overall consequences would be better if the rule were not broken. He refers to overall consequences of rules only to decide which moral theory is the correct one. (This point is made in the article on rule-consequentialism to which you supplied the link.)

    The second objection, that utility is not the only thing that matters, plainly does not apply to consequentialism (you make this point yourself, in your third paragraph).

    The third objection, that we owe more to family and friends than to strangers, can also be avoided by the rule-consequentialist by incorporating a rule to that effect.

    I do not think that these are ad hoc responses to your objections. Morality is concerned with obligations; and these obligations are largely grounded in rules; and these rules, for the most part, do not refer to consequences. All that is pretty much commonsensical. But morality also has a purpose: to promote human flourishing (or something similar).  It is therefore necessarily linked with consequences (rule-consequentialism). But because it does not always speak of consequences, moral agents are not usually thinking of consequences when they are deciding how to act.

    This is not to say that morality reduces entirely to a set of rules. I think that is false, as I will explain in a response to Roderick that I will make shortly.

    • Bongstar420

      “The third objection, that we owe more to family and friends than to
      strangers, can also be avoided by the rule-consequentialist by
      incorporating a rule to that effect.”

      Kinship “law” in genetics: it is a social relationship that exists in some organisms. Essentially, there are genes that produce behavior favoring genotypes of close relatedness. Family is always favored in status over non-family.

      http://en.wikipedia.org/wiki/Kin_selection

      Ya. I’m a dirty, dirty naturalist

  • Anon.

    I think utilitarianism fails far before any of the (good) points you mention become relevant.

    How do we define utility? If we define it, as in economics, as an ordinal ranking of preferences, it is clear that utility is neither comparable nor aggregatable, and thus useless for the purposes of utilitarianism.

    If, on the other hand, it is defined more nebulously as some sort of numerical measure of well-being, it is useless because it is impossible to measure. Even if we could measure it, the individual potential for utility, which must clearly differ among different people, is both immeasurable and highly significant. The same goes for the “utility curve.” And even if we could measure that, it would lead to results that people find highly counter-intuitive (which I suppose might not be an issue for some), arising from the effects of the steepness of each individual’s curve.
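
The steepness-sensitivity point can be made concrete with a toy calculation (the utility functions below are invented for illustration, not drawn from the comment): if person A's curve converts resources into well-being twice as efficiently as person B's, the sum-maximizing split of a fixed budget is far from equal.

```python
# A toy sketch of how the steepness of individual "utility curves" drives
# the aggregate result. The utility functions are invented for illustration.

from math import sqrt

def best_split(budget, u_a, u_b):
    """Brute-force the integer split of `budget` maximizing u_a(x) + u_b(budget - x)."""
    return max(range(budget + 1), key=lambda x: u_a(x) + u_b(budget - x))

# Identical curves: the sum-maximizing split is equal.
equal = best_split(100, sqrt, sqrt)

# A's curve is twice as steep: A absorbs most of the budget.
monster = best_split(100, lambda x: 2 * sqrt(x), sqrt)

print(equal)    # 50
print(monster)  # 80
```

The aggregate ranking flips entirely on a parameter (the factor of 2) that, as the comment notes, we have no way to measure.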

  • http://profiles.google.com/daviddfriedman David Friedman

    ” Some of these people … are generally sympathetic to the idea of bleeding heart libertarianism, but think that utilitarianism does a better job explaining and defending its attractive qualities than do appeals to “social justice.” Actually, even though he sounds less sympathetic, I think this is basically David Friedman’s position too”

    Pretty close. I argued against the hard-line natural-rights position in the second edition of _The Machinery of Freedom_, published in 1989, so perhaps you should say that BHLs are generally sympathetic to my position?

    But I find both halves of the “social justice” terminology unconvincing, and not only because it is terminology mostly used by the left. I’m not sure it makes any sense to offer a moral evaluation of the entire structure of a society, which is what I gather some of you believe that the “social” part implies. A society is, in Hayek’s terminology, a self-generating order, not a deliberate construction, and I think moral evaluations are ultimately about the acts of individuals.

    And I don’t think all moral judgements are about justice. It may be a virtuous act for me to feed a hungry man even if he has no morally legitimate claim against me, hence my failing to do so is not unjust.

    Further, insofar as you are going to put any content into your idea of social justice,  I continue to find the idea of judging a society by how well off the worst off people (or some subset thereof) are unconvincing. It’s a good thing to help badly off people get better off, but it’s also a good thing to help people who are living pretty good lives to live much better lives—I don’t see why one would want to make a distinction of kind between those two, which I think talk about social justice does.

    Finally, going back to the historical issue and Adam Smith in particular, I don’t think he was particularly sympathetic to the worst off people—as I pointed out in the Cato Unbound discussion, the poor he was talking about were the bulk of the population, and he explicitly justified his concern for them on that basis.

    So far as the people at the very bottom, they show up in an interesting bit of his discussion of taxation. Smith thought tax burden should be proportional to income–that’s the first of his maxims of taxation, although he is willing to bend it a little to accept taxes that don’t fit if they have other desirable characteristics. He also thought that the wage level was largely determined by the cost of necessities, along the lines of what is sometimes called the iron law of wages. He concluded that taxing the necessities of the poor was wrong, not because it hurt them but because it didn’t–such a tax would result in a rise in wages, and so be passed on to the not poor.

    His conclusion was that the poor (meaning the working class) should be taxed by taxing their luxuries. That wouldn’t reduce the laboring population and drive up wages, because the bulk of the poor were responsible sorts who would respond by reducing their consumption of luxuries.

    What about the irresponsible poor? Things would go badly for them and their kids, but that wasn’t a big problem, because they weren’t very useful sorts anyway:

    “If by the strength of their constitution they survive the hardships to which the bad conduct of their parents exposes them, yet the example of that bad conduct commonly corrupts their morals, so that, instead of being useful to society by their industry, they become public nuisances by their vices and disorders. Though the advanced price of the luxuries of the poor, therefore, might increase somewhat the distress of such disorderly families, and thereby diminish somewhat their ability to bring up children, it would not probably diminish much the useful population of the country.”

    I don’t think Rawls would approve.

    • Bongstar420

      “And I don’t think all moral judgements are about justice. It may be a
      virtuous act for me to feed a hungry man even if he has no morally
      legitimate claim against me, hence my failing to do so is not unjust.”

      The only morality in that is if you would wish to be treated the same way you treated him were you in his circumstances. Since the actual being that you are would never end up in those circumstances, it is a non sequitur. You do what I do: look at them for a second, and move on, wondering what is going through their minds that led them to those ends.

      With social justice in general, that is a more difficult but similar situation. The problem is due to social-class immobility. My opinion is that it stems from narcissism and kinship “law.” Basically, there is a first-past-the-post contest always happening for new developments. The first people invest their winnings into a king-of-the-hill situation. They do not accept people who are literally better than or equal to them in standing, and they enforce their positions by virtue of the power of material occupancy (the king of the hill has a defensive advantage due to slope differentials). Equal opportunity is good enough to solve the situation. Anyone opposing the institutionalization of equal opportunity is highly likely not to be competitive for their position (otherwise equal opportunity would make them look better rather than worse). You see, if you can’t feel comfortable guaranteeing that everyone gets a fair chance, how can you be certain you are the best fit for your position, which had a false scarcity of applicants imposed on it? At the very least, you would be more assured of how “deserving” you were of your position “above” others, if not every other person now and in the future benefiting from people being in their proper stations in life.

  • Damien S.

    “A society is, in Hayek’s terminology, a self-generating order, not a deliberate construction”

    And one of the things every society self-generates is some form of government, which among other things can then be a feedback mechanism allowing the society to somewhat deliberately construct itself. Just as a brain is a mass of interacting neurons that can somewhat affect its own development by means of high-level evaluations and decisions.

    “I don’t see why one would want to make a distinction of kind between those two”

    Diminishing utility?  The fact that it’s a lot easier to make the worst off better off, than to make the well-off massively better off?

    • http://profiles.google.com/daviddfriedman David Friedman

      (quoting me)
      “I don’t see why one would want to make a distinction of kind between those two”

      (responding)
      “Diminishing utility?  The fact that it’s a lot easier to make the
      worst off better off, than to make the well-off massively better off?”

      That’s a difference of degree, not of kind. And it’s utilitarianism, which most of the BHL folk explicitly reject in favor of “social justice,” whatever that means.

      And whether it’s easier to make the worst off better off will depend on the particular circumstances.

      • Bongstar420

        No. The rich get richer and the poor get poorer… in relative terms, that is. That is how it is. Today’s rich are wealthier than any of the rich of previous empires. They exert much more power as well. Average people exert sparingly more than the historical norm (in world terms). The gap has widened compared to all historical standards. That is in fact the easiest thing to do. The reason for this conjecture is that it is what has in fact happened: the easiest thing, or should I say, “the path of least resistance,” though it may not seem so from some people’s perspective.

        That is why charity or subsidy for the “worst off” produces more words than actual results.

        And, if one is to posit an action, how would one establish a reason without a purpose, and alternatively, how could an action be sustained over time without a positive consequence? That is not to describe existence, by the way. It is a common misconception to think that human existence serves a “purpose” or that some existence exists for a purpose.

  • Damien S.

    Reading Pinker’s Better Angels and his description of Fiske’s relational modes, from Communal Sharing to Market Pricing/Rational-Legal, I was amused to realize that utilitarianism and anarcho-capitalism share an abstract similarity as members of Rational-Legal: they both put a number on everything. In one case market prices with explicit trades, in the other case estimates of utility or of welfare metrics like life expectancy with policy tradeoffs. But everything ideally has a number and everything can be traded off or sold, in a universalizing context that dissolves or ignores distinctions like nationality.

  • Pingback: Against Utilitarianism, abridged – Radio Attack

  • Pingback: James Bruce’s Critique of My Consequentialist Libertarianism: Part II | Online Library of Law and Liberty

  • Aaron Boyden

    I tend to think that utilitarianism does give the right reason for respecting individual rights, contrary to your point one. First, I should say that I’m suspicious that there’s a meaningful distinction between rule and act utilitarianism, as any utilitarian should want to use the best strategies for bringing about the good, and the best strategies are usually going to involve following some kind of rules rather than wasting enormous amounts of time and energy trying, and likely badly failing, to calculate the exact outcome of some course of action. So it seems to me that utilitarianism with rules is just utilitarianism.

    But I notice that people tend to think that strict, nearly exceptionless moral rules are the preferred approach to interpersonal situations, while in contrast in issues of public policy, we tend to think that decisions should be guided by considerations of the greater good rather than our usual strict moral rules (it is hard to reconcile any war ever with our usual rules on killing, yet few are pacifists, for example).  If one is a utilitarian, this appears entirely in order; the rules are an excellent strategy for interpersonal situations, but difficult to apply in a public policy context, and in the public policy context, stakes are higher and resources for actually taking the time and assessing outcomes are more available.  I do not think any alternative moral theory does a remotely comparable job of explaining why the two types of cases should be treated so differently.  So I think utilitarianism does a better job of explaining, say, why it’s wrong to kill innocents than any deontological view, because there actually are exceptions that most people recognize, and utilitarianism does an excellent job of picking out where the exceptions should be.

    Oh, and I’m a philosopher, BTW.

  • Paul Hield

    Rejection of Rawls’ claim that Utilitarianism could lead to slavery.

    Utilitarianism cannot be used to justify slavery for two reasons. First, implicit in Utilitarianism is the notion of diminishing returns from consumption. Second is the notion that only an individual can judge what promotes his own well-being (utility), and therefore, in order to maximise the aggregated sum of all such judgements, each individual must be free.

    Taking the first reason: slavery assumes that there is an owner and a number of slaves, and that the fruits of their labours are unequally divided such that all but the essentials for life accrue to the owner. Under the assumption of diminishing returns, this division would not maximise the utility derived from the activities of the slave owner and his slaves.
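    The diminishing-returns point can be checked with a small numeric sketch (mine, not the commenter's; the logarithmic utility function, the 100-unit total, and the 95/5 split are all illustrative assumptions standing in for any concave utility and any highly unequal division):

```python
# A minimal illustration of the diminishing-returns argument:
# with a concave utility function, an equal split of a fixed product
# yields more total utility than a highly unequal owner/slave split.
# Assumptions: log utility, 100 units of product to divide.
import math

def total_utility(shares):
    """Sum of log-utilities; log is concave, so marginal utility diminishes."""
    return sum(math.log(s) for s in shares)

unequal = [95.0, 5.0]   # owner keeps all but the essentials for life
equal = [50.0, 50.0]    # equal division of the same total product

# The unequal split is dominated under diminishing returns.
print(total_utility(unequal) < total_utility(equal))  # True
```

    The comparison holds for any strictly concave utility function, not just the logarithm; that is the sense in which diminishing returns alone rules the arrangement out.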

    The second reason addresses a possible objection to the first: what if the slave owner was not intentionally exploitative, but attempted through his own reasoning to allocate the fruits of their collective labours according to what he perceived to be in the best interests of himself and his slaves? Utilitarianism would reject that arrangement on the basis that only individuals can know what best balances their personal set of preferences.

    It is worth noting that Utilitarianism does not require an equal distribution of wealth. It is true that if a group of friends, whilst travelling, came across a sum of money, then Utilitarianism would suggest, through the assumption of diminishing returns and in the absence of any other relevant information, that utility would be increased by sharing the find equally amongst themselves. But that is only the special case where the sum of money was obtained by luck and not through the efforts of any of the individuals. If the money was obtained by effort on the part of the individuals, Utilitarianism would require that the motivation to create wealth also be taken into account in arranging a system which aimed to maximise utility. If strict sharing of all earnings were enforced, why would any individual work at all, when all he had to do at the end of a day of others’ labours was appeal to redistribution to maximise utility? That would lead to a system which did not maximise utility, and it would be rejected by Utilitarians.

  • Umunandi

    1) ‘Consequentialism’ and ‘utilitarianism’ are sometimes used interchangeably. Utilitarianism is usually distinguished from other forms of consequentialism by its impartiality (which is consistent consequentialism, since a distinction between a consequence affecting one person rather than any other stems from a concern for something other than consequences). Utilitarians have different ideas about what kinds of consequences are desirable (I won’t go into why preference utilitarianism is not properly a consequentialist theory); ‘utility’ can mean different things depending on the utilitarian. Consequentialism is concerned exclusively with consequences when it comes to judging actions or decisions, but not when it comes to judging motives or character.

    2) Utilitarians don’t disregard the separateness of persons; they only claim that the interests of separate people are commensurable. Besides, the universe is one inter-connected reality. Utilitarians respect the interests of all individuals *equally*, which, taken to its logical conclusion, justifies sacrificing the interests of some for the interests of others when the benefit outweighs the cost and no other alternative produces more benefit and/or fewer costs. The (hedonistic) utilitarian wants everyone to experience happiness and to be free from suffering, but interests conflict, and this is the consistent position for someone who cares equally about the well-being of all individuals to take. If ‘collectivism’ and ‘individualism’ emphasize the interests of the group vs. the interests of ‘the individual’ respectively, then the dichotomy is incoherent, because a group is the sum total of the individuals who comprise it; to care about the group is to care about all of ‘the individuals’. If collectivism emphasizes the interests of the majority and individualism the rights of the minority, then utilitarianism is neither collectivist nor individualist. In some scenarios, benefit to fewer individuals outweighs benefit to a greater number of individuals, and vice versa. Utilitarianism is not concerned with the greatest good for the greatest number; this is a double maxim, because maximizing benefit and maximizing beneficiaries are two different goals.

    3) Only hedonistic utilitarians regard pleasure and pain alone as good and bad and not all are moral realists who believe pleasure and pain are intrinsically good and bad. What all pleasurable states of mind have in common is that they are inherently likable and desirable (even if our desire for pleasure is overridden by other desires). Pleasure being intrinsically good cannot be logically demonstrated or refuted (I believe this is empirically proven by direct experience) but if anything is intrinsically good, it must be the only intrinsic good since fundamentally different things can’t be good by their inherent nature if, being entirely separate things, they have fundamentally different natures – this violates the law of non-contradiction. Moral consistency requires a monistic conception of value. Sadistic pleasure is intrinsically good but the psychology that allows it (a disregard for the pleasure and pain of others) is intrinsically criticisable and utilitarians should encourage empathy because empathetic people are both happier and *necessarily* (not contingently) more likely to maximize happiness in the long run. (Affective) empathy, or wishing happiness on others, only has inherent moral value if happiness has inherent value. A distinction between the happiness of humans and non-human animals is arbitrary and inconsistent and the idea of qualitatively higher and lower forms of pleasure is incoherent (‘higher’ and ‘lower’ are quantitative concepts, not qualitative, no shade of blue is ‘more blue’ than other shades, just darker or lighter).

    4) It is inconsistent to make a distinction between any one person’s happiness and anyone else’s and this is true regardless of whether or not happiness is inherently desirable. A credible moral theory has to be consistent, otherwise it’s as unintelligible as asking someone to go the store without going to the store.
