Wednesday, February 24, 2010

What's the obvious thing to say, and what could I be saying that's not obvious?

Here's an obvious story to tell: Consider any old thing that you know. It's justified. Well, you follow justification up the chain, and you end up stuck with something that's gotta be basic. Well, what's our justification for believing that? Either it's experience (not a belief), a priori knowledge, or we just take it as basic (and then we had better figure out some way of distinguishing those beliefs from others). In response to the familiar regress problem we might have to deal with arguments against this way of understanding justification, but we may feel that our picture of justification has got to be foundationalist. And maybe the beliefs that we take as basic are the really really obvious ones. And then we argue that moral realism or something is really really obvious in the same way.

This is a simplistic story. And it wouldn't be right to ascribe this to anyone. But, for the sake of my own thought process, lemme ascribe it to someone who doesn't deserve this kind of simplistic treatment. We'll call him E. E tells a sort of similar story. All of our beliefs are generated by some basic belief-forming principles. How are these basic belief-forming principles (such as IBE) justified? This goes just one step above the previous analysis of the tree of justification. Meaning, take whatever our basic beliefs are, and say that we're not taking them as basic. Maybe because we're relying on experience directly, or maybe because we're relying on a priori knowledge. But that means that we have some method for forming beliefs, one that takes experience or a priori intuitions and results in justified beliefs. These methods themselves need justification, though, and so now we're really left up the creek without a paddle. No matter what your solution, you now have the same problem as those who simply accepted certain beliefs as basic. So how do you distinguish between the good basic beliefs and the bad ones? You tell a story about which basic methods are justified and which are not.

This could not provide ultimate justification, of course. Any first-year philosophy student can see that this would only shift our problem to another method, another principle, and another unjustified belief. I think it's pretty obvious that there is no ultimate ground, only relative grounds, and that epistemic regress is unavoidable.

E's answer seems to be just to provide some belief that justifies IBE instead of IWE. That's great, and maybe we should take that as primitive instead of IBE, but how does that help, ultimately?

I think the answer is that it doesn't. So now I want to explain where there is room for someone to say something a bit different.
----------
One point to note is that the idea that we could be justified from top to bottom in all our beliefs is necessarily false. I argue this by pointing out that epistemic realism--a view whose falsity would result in there being no justified beliefs at all--cannot be justified. I then argue that whatever our most basic beliefs are, they similarly cannot be justified--after all, how would you justify them without some epistemic principles, and I am denying you even those at this point.

Then I argue that this doesn't spell doom at all for our cognitive enterprise, because these beliefs that cannot be justified are also not unjustified; that is, it's not like we think that they're wrong. It's just that there can be no reasons, either for or against.

Now, where does ethics fit into this picture? The typical picture is that moral realism might gain justification somewhere down the line. But given the picture I've just sketched, the most promising thing to attempt is to formulate some principles that can serve as the most basic ones, the kind that can't be criticized for being unjustified, since we don't have the epistemic resources to do so.

This needs care, for two reasons. One, there's a problem with taking particular beliefs as basic. I don't know what it is, but there is such a problem. Another is that epistemic principles can conflict and be unstable.

So, in sum, this is the picture I'm providing. Epistemology is an a priori endeavor, where we necessarily start by taking certain things for granted. These things that we take for granted are similar to intuitions, in the sense that the only reason we believe them is because they're obvious and available to us, and not because we think that there's a reason that they're true. In fact, at the very foundations we can't have reason to think that things are true, and so there's no way to criticize our taking certain things for granted. This means that, very quickly if we choose well, we start getting a system of epistemology, of what's justified and what's unjustified, what we should believe and what we shouldn't. And this is all for free, more or less, from those first things. We choose things when we can't be criticized for choosing them (when they don't conflict). We get a lot, but we're sloppy and it's complicated, so we're still fighting over it. But the pressures of cooperation and living together force us to refine our system over time, and we've got it pretty well down in practice.

(Note that my arguments show, I think, that if our most basic item of cognitive commitment is normative, we're gonna be in trouble, because eventually theoretical reason runs out of resources with which to defend itself. This is natural and untroubling.)

The question is, have we left things out of our picture? Maybe we left ethics out. Maybe we left religion out. Did we screw up? How could we tell if a basic belief doesn't work out? After all, at the very start of our cognitive adventures we have no epistemic principles, and no way to criticize you for believing anything at all. So what's to stop you from taking something as basic? Nothing possibly could. Well, should we just pack up and go home? No. We have to make decisions, and this is something that we might prefer to avoid, but it's not something that we can. (With a nod to that piece that I like by David Lewis) at a certain point we just pick between competing systems, and that's all we can do. So we can add a belief to our basic set and then note the troubles that occur, and then decide whether it's worth giving up the conflicting beliefs or the basic one.

How does this work out with ethics? We need to see if there are any conflicts with the rest of our beliefs. Well, there are, and these are the arguments for anti-realism in ethics.

But here comes epistemology and theoretical reason again to add an interesting twist. We started by trying to find a place for the moral norms governing practical reason inside theoretical reason. And then we noted that the foundations of theoretical reason are such that there are a number of blank spaces. And then we noted that ethics could be plugged in, but that it conflicts with much of the rest of our picture. Here comes an interesting suggestion: maybe our theoretical picture of the world sucks. How could that be? Suppose that we had an epistemic principle that said something like "Never ever believe anything without justification." That principle is self-defeating, since it would undermine our basic principles (which presumably lead to this principle) and epistemic realism itself! So that would suck. Maybe there are other epistemic principles behind our objections to moral realism, and we're simply not being sufficiently reflective to notice that they're self-defeating.

How could this be? There are these arguments against moral realism. Do they actually also apply to epistemic realism?

Now, a note: epistemic realism is a VERY different belief than moral realism. From the perspective of theoretical reason, it's basically rock bottom, and that's what the above arguments show. So to argue that we could prove moral realism by parity arguments...that's just not going to fly. Another problem is that none of these arguments could actually be considerations counting AGAINST epistemic realism. It's unclear what they would be capable of showing at all in a discussion of epistemic realism (yes, they could show epistemic expressivism, but then you're in the peculiar position of defending expressivism against arguments that it falls into nihilism in order to defend moral realism?)

At best, here's what we could hope for: these arguments show us that we have permission to take moral realism as a basic principle, or something. This would involve clearing up the confusions about what ethics and epistemology require. And then we would have permission to take it as primitive.

So this is the lesson of Enoch combined with the lesson of Cuneo: Enoch tells us that if we could take ethics as basic, that would rock. The lesson of Cuneo is, maybe ethics isn't all that much worse than epistemology.
