Friday, March 5, 2010

From Scanlon: "Why should this challenge be so difficult? Samuel Scheffler has suggested one possible answer. What 'lies at the heart of consequentialism,' he says, is 'a fundamental and familiar conception of rationality that we accept and operate with in a very wide range of contexts.' This is what he calls 'maximizing rationality.' 'The core of this conception of rationality is the idea that if one accepts the desirability of a certain goal being achieved, and if one has a choice between two options, one of which is certain to accomplish the goal better than the other, then it is, ceteris paribus, rational to choose the former over the latter.'"

In context, this is a discussion of a well-known objection to deontological ethical views. The accusation is that there is a maximizing principle of rationality: if one accepts something as a goal, then one should maximize the occurrence of that goal and minimize anything that obstructs it. And Scanlon offers a response to it. So goes that discussion.

But I'm interested in something else, and that is the intuitions that crop up in this discussion. I think there are clearly times when being moral requires us to confer value on states of affairs, and this is what drives the "paradox of deontology." (Scanlon resists the idea that we always accept some goal whenever we accept some reason or confer value on some action.) But there is clearly a real sense of some things being morally right that does involve a maximizing rationality. The idea that you should do something a little bit wrong in order to do something very good has great currency in ethics for a lot of people. Maybe it's wrong to lie most of the time, but when a human life is at stake, the equation changes. It then becomes right to do the bad thing in order to do a great deal more good.

Let's talk about epistemology. Is there anything like this? Sorta. There is a sense in which, taking a third-person perspective on the matter, you can say, "We should invest in science education because more good will come of that, even though in doing so we'll take away resources from math. Even though we value people having knowledge, we can maximize the potential for knowledge by focusing on science education." That works.

How strong is the parallel to ethics, though? Here's something that doesn't seem to me as if it should work in epistemology: say it's empirically true that a person will do a better job of believing true things and disbelieving false things if they believe there is a secret ghost in their brain that gets very, very upset at them whenever they have a false belief. The myth keeps believers on their toes. Now, should we apply the maximizing rationale here and say that there's a fine epistemic trade-off going on: we've given up one false belief for the benefit of far more true beliefs? This seems very, very wrong.

I think that the ethical parallel is much more plausible. We might very well teach a person that they should be willing to harm one person, as long as the good they can do outweighs the bad they've done. This is the burden of consequentialist theories. But my point here is not that consequentialism is true of ethics in a way it's not of epistemology (I have no idea). My point is that there certainly are some cases in which it would be right to do something bad for the benefit of doing far more good. But in the epistemic case, my sense is that we can never justify believing some falsity for the sake of believing many more truths.

Now, this might just mean that having true belief isn't really a goal of epistemology. But really? It sure seems like a crucial goal of epistemology is ensuring that believers have true beliefs.

What could explain this difference between epistemology and ethics, the difficulty of applying a maximizing rationality to epistemology?

Either the maximizing rationality isn't really a rational requirement, or epistemology doesn't involve commitment to any epistemic goals or aims, or something besides the state of affairs of having true beliefs is the aim of epistemology. I think it means that we're not taking on any goals in epistemology, unlike in ethics, where sometimes we do value states of affairs. Epistemology is not about the value in certain states of affairs--I think that's what this means. Does this relate to the other ways in which epistemology is unlike ethics? Not sure. Sure hope so, cuz I need 20 more pages.
