
Doing Philosophy Means Being Wrong, but with Style

Context: Michael Huemer claims that the “great” philosophers are usually bad thinkers. They defend implausible ideas with bad arguments.

Vallier responds that the great philosophers are like architects. Their great achievement is that they build coherent systems of thought.

I’m not much convinced by Vallier’s response, in part because, when I study the history of philosophy or read papers in the field, the “greats” often seem to have incoherent systems. A large number of published papers on the greats, and a good number of the classes, take the form of “Great Thinker says X here and Y here, but X and Y are seemingly incompatible. Let me try to find a way to spin X and Y to render them coherent.”

Anyway, I’m rather pessimistic about philosophy in general, not just about the value of studying the history of philosophy or of the greats.

Look through the PhilPapers Survey results here. You’ll notice that for most interesting philosophical debates–the kinds of issues that would draw you into philosophy in the first place–philosophers are fairly evenly split. If there are three major positions on some question, you see roughly a third taking each position.

At most, one of these positions can be true. For some debates, the positions do not necessarily exhaust the logical space, and so it might be that none of the major theories is true. Further, the positions are defined broadly (e.g., all communitarian political philosophies are lumped together, as are all forms of theism), so even if one of the major positions listed is correct, that doesn’t mean that most of the people who subscribe to that position know the truth. If theism is correct, but it turns out that Odin is the one true God, it doesn’t do much to vindicate Christian and Muslim theists that they got “theism vs. atheism” right. If Rossian pluralism is the correct moral theory, it’s hardly a victory for the Kantians that they got the deontology vs. consequentialism debate right.

It seems, then, that studying philosophy is unlikely to induce you to believe what’s true. After all, after studying philosophy and becoming a philosopher, only a minority believe any given major position, which means the majority necessarily believe something false. If there are three options, A, B, and C, with a third believing each, then by necessity at least two-thirds are wrong. If you were assigned to A, B, or C at random, you’d have at most a one-in-three chance of getting the right answer. Of course, you aren’t exactly assigned at random, as I’ll discuss below.
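To make the odds concrete, here’s a toy simulation of the random-assignment scenario (my illustration, not anything from the survey data): one of three positions is stipulated to be true, and each believer is assigned a position uniformly at random.

```python
import random

# Toy model: three positions, exactly one of which is true, with each
# "philosopher" assigned a position uniformly at random.
positions = ["A", "B", "C"]
true_position = "A"  # stipulated for the sake of illustration

trials = 100_000
correct = sum(random.choice(positions) == true_position for _ in range(trials))
print(f"Share holding the true position: {correct / trials:.3f}")  # about 0.333
```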

Now, one might object that even if most philosophers end up believing something false, perhaps studying philosophy at least tends to move people toward the correct position. That’s at least logically possible. Suppose A is the correct answer to some philosophical question, and imagine that before studying philosophy, only 1% of people believe A, but afterward, a whopping 22% believe it. In that case, studying philosophy helps, even though it doesn’t make it more likely than not that you’ll believe the truth.
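A back-of-the-envelope check on those numbers (the 1% and 22% are just the hypothetical figures above, not real data):

```python
# Hypothetical shares believing the true position A, from the
# thought experiment above (not real survey data).
before_study = 0.01
after_study = 0.22

print(f"Relative improvement: {after_study / before_study:.0f}x")  # 22x
print(f"More likely than not to be right? {after_study > 0.5}")    # False
```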

However, this is still merely hypothetical. To know whether philosophy at least “helps” in this way, we’d need to know what the correct positions are. We’d then need to compare treatment and control groups to see how studying or writing philosophy professionally changes people’s beliefs, controlling for confounds. Only then could we see whether and how much philosophical study has a positive influence. But since we don’t have the philosophical answer sheet next to us, we can’t quite do that sort of research.

Further, we have good reasons to suspect there are pernicious biases and other improper factors that tend to affect what people believe. First, if you read the literature on bias in political thinking, it seems likely that people join political tribes for non-cognitive reasons, and that the more ideological members of those tribes simply rationalize whatever random things the tribe endorses. We also have strong incentives to share the beliefs of those around us. It’s not obvious philosophers are particularly good at overcoming political tribalism.

Second, what people believe tends to depend a great deal on who their advisors were. People who go to Harvard tend to come out Kantians of some sort. People who go to Arizona tend to come out Gaussian contractualists or Schmidtzean pluralists. Now, some of this is due to selection: the Kantians are more likely to apply to Harvard than, say, consequentialist ANU. Part of it, though, is that when you attend a program with people who defend X, you encounter much better arguments for X and weaker arguments for other positions. But this seems to be a rather unreliable mechanism for changing your beliefs. A Gaussian contractualist like Kevin would have ended up believing something else had he gone to a different program. Is it just lucky for him that he attended Arizona and not Harvard? Is it just lucky for him that he had Gaus as an advisor instead of Christiano, Schmidtz, Wall, Pincione, or someone else?

Third, there are probably some selection effects, where people who believe certain things are more likely to specialize in a given subfield, and perhaps end up dominating it. For instance, most philosophers are atheists, but most specialists in philosophy of religion are not merely theists, but adherents of Abrahamic religions. It could be that studying philosophy of religion intensely causes you to realize not merely that some god exists, but that the real god is Abraham’s. Or it could be that Abrahamic theists self-select into that specialty and have succeeded in capturing the field and its journals. This in turn tends to decide not only who ends up becoming a specialist in those fields, but also what the texts say and which ideas people encounter.

Fourth, in philosophy, one way to improve your status, get published, or get a job is not to conform to what others think. Another is to come up with novel and exciting arguments on behalf of what others already believe. Philosophy, to its credit, still respects challenging the status quo and still sees its job as questioning conventional belief. (This is one reason philosophy is still better than some of the deeply corrupt fields, such as history.) Accordingly, we have incentives to come up with clever arguments for surprising and unconventional conclusions, and people who do so tend to end up believing what they write. But neither background influence (the push to conform or the push to challenge) seems particularly reliable.

Imagine you have two goals: A. Believe the true answers to interesting philosophical questions. B. Avoid believing the false answers to those questions. If you value avoiding falsehood as strongly as you value believing the truth, it seems the best strategy would be to try to remain as agnostic on philosophical questions as you could possibly be. What’s the answer to Newcomb’s Problem? Answer: I don’t know. Is the mind material? I dunno. What’s the best systematization of ethics? Dunno.
