Saturday, December 26, 2009

What is non-naturalism?

I really don't have a firm grasp of this concept.

If one is a non-naturalist about ethics, then one thinks that ethical properties are not natural ones. Well, what's a natural property? Attempt one: it's a property in the universe. That's not helpful! It just means that a non-natural property is one that doesn't exist (since I assume the universe is just the class of all things that exist). OK, so maybe the natural properties are the ones studied by scientists. By that definition, plenty of physical properties that no scientist happens to study would come out non-natural. So maybe the natural properties are the ones that could be studied by scientists. But what's the point of this definition; is the point just that scientists can only study properties that can be found in space, that are physical? Then all it means to be non-natural is to be an abstract object. That could be, but then the term picks out nothing beyond abstract objects, obviously, so it's superfluous terminology.

I can think of two other definitions of natural (and so of non-natural) properties. I need to read more so that I have a firmer idea of what it means. One definition: a natural property is one that is relevant to scientific explanations, or indispensable to them. This merges the natural properties with the ones that are derived from IBE, obviously. Again, not so interesting, but possible. The second definition is that the non-natural properties are those that we have to take as primitive. This won't quite do, because then some physical properties will come out non-natural. Grr. Not really sure what to do here.

Saturday, December 19, 2009

Philosophical Progress

Since philosophy folks are always feeling defensive about philosophical progress, lemme say something concerning that. I remember that in the introduction to "Realism with a Human Face" there was a quote from Putnam. It went along the lines of: "There are almost never answers to philosophical questions, but there are better and worse ways to think about the questions." This is one way that philosophical progress occurs: there are some deep questions that might ultimately be unresolvable, but there are better and worse ways of struggling with them.

For example, you might wonder whether God exists. And so you might start looking into believing in things that you can't see or directly observe, that kind of inference, etc. And then you can have a whole long debate about that. But along might come some philosophers and say "You're thinking about belief in God all wrong, because what it means to believe in God isn't what you guys have been assuming it means, and so your debate has been misdirected." In this sorta-inspired-by-maybe-real-philosophy scenario, there has been philosophical progress through a clarification of what is worth debating.

To ethics: I get the feeling that the philosophical community has been moving, for a while now, towards a broadening of debates about ethics that have lasted a couple hundred years. I don't have the kind of knowledge to really defend this, but one gets the sense that at a certain point folks realized that what makes ethics distinctive and queer--its normative properties--can be found in lots of other things. It's routine to point out that this is true of epistemology, of what justifies our beliefs and makes some better than others. But it's been recognized in philosophy of perception too.

So the sense I get is that progress has been made on problems of ethics, insofar as we've recognized that the queerness of ethics is really just its membership in a larger domain, that of the normative. And now there's focus on the normative, and people tend to deal with the normative all at once. This seems to me a better way of thinking about ethics, as part of the normative realm, and an example of philosophical progress, an insight that the rest of the world is going to owe to philosophers who spend years slogging through, trying to figure out what's going on. We'll end up with a much better understanding of what it means to be moral, and a better understanding of what obligates us, and this will lead to different ways of thinking about how we treat each other.

Wednesday, December 2, 2009

And we've moved on to another area of research!

Again with the shifting from area of philosophy to area of philosophy...but this shift is more natural than the previous ones, and it's been coming on for a while.

http://philpapers.org/browse/epistemic-normativity-misc

This page seems like a potential gold mine of things to read and ponder. In short, the questions I'm thinking about now are about the differences, similarities, and interconnectedness of statements such as "you morally ought to do X" and "you epistemically ought to believe X". Right now I'm feeling out arguments that try to blur these; not sure how I feel about it yet.

Ah, philosophy of math! How I miss you so. Hopefully there will be a way to bring it back in.

Wednesday, November 25, 2009

Writer's block. OK, let's try this on the blog before trying to write it down again.

I have a draft due sometime before I go to sleep tonight, and I'm having trouble getting the ideas clear enough to write down. So here's a first draft of my first draft.

Moral realism is the view that morality is factual, sometimes true, and not fake (like, it's not mind-dependent, or it's not like we're just identifying morality with social norms and peer pressure). How can one argue for this view? One way is to try, somehow, to latch moral realism onto some other discourse that we feel much more comfortable about.

For example, here's a recap of Sturgeon's argument: if you're a scientific realist, you think that our scientific views are largely true. And so you probably think that inference to the best explanation, or something awfully close to it, is a correct principle. This tells us that unobservables like protons or electrons exist, and it might also tell us that numbers exist. Sturgeon argues that ethical facts are necessary for making our explanations better, and so inference to the best explanation should tell us that there are moral facts. Of course, one could just say "Feh, I don't like science either," but we're not taking up that argument. We're just trying to argue for the conditional: "If you accept science (then you accept inference to the best explanation, and if you accept inference to the best explanation) then you accept moral facts."

Let's say that fails. What other ways are there to approach an argument for moral realism?

In Enoch we saw another way. But the bedrock of his argument is still scientific realism. Put it like this: inference to the best explanation is correct; we know that from scientific realism. If you think that inference to the best explanation is correct, then you believe that it is justified. Then Enoch provides a justification for IBE. Then he says that the justification he provided also justifies a new principle--inferring that what is necessary for deliberation is true.

I've spent time trying to argue that this approach is misguided. First, I think it's easy to see how you could have conflicts between what's necessary for deliberation and what's necessary for explanation. What this comes down to is a problem with creating something like two epistemic realms--they're going to interact and cause problems (though maybe make things better too). Also, the argument gets off the ground by demanding that a basic belief-forming method, such as IBE, be given a justification, but maybe that's an unfair requirement. Maybe we need to simply take some principles as primitive and unjustified, or maybe we need to be coherentists and not ask for a single justification for every belief. So Enoch's argument is sensitive to alternatives that seem more attractive than his account.

There's another account that came before Enoch's, but it's vague and not fully worked out (as far as I can tell). S-M suggests the following:

Still another reply, compatible with the first two but relying specifically on neither, shifts attention from science and from mathematics and logic, to epistemology itself. To think of any set of considerations that they justify some conclusion is to make a claim concerning the value (albeit the epistemic as opposed to moral value) of a conclusion. To hold of science, or mathematics, or logic, that there is a difference between good evidence or good arguments and bad ones is again to commit oneself evaluatively. This raises an obvious question: under what conditions, and why, are epistemic claims reasonably thought justified? Whatever answer one might begin to offer will immediately provide a model for an answer to the parallel question raised about moral judgments. There is no guarantee, of course, that our moral judgments will then end up being justified. The epistemic standards epistemology meets might well not be met by moral theory. But there is good reason to think the kinds of consideration that are appropriate to judging epistemic principles will be appropriate too when it comes to judging other normative principles, including those that we might recognize as moral. This means that any quick dismissal of moral theory as obviously not the sort of thing that could really be justified are almost surely too quick.


In other words, the epistemic realm is another normative realm. The question S-M is asking is what justifies our justifications? What's the second-order justification of justifications? Now, if we're just asking for more justifications, why do we think that we'll get anywhere new? Put another way, we're asking for normativity to provide the basis for normativity. I think the idea is that we'll get somewhere, and that since we're providing justifications of something normative, that will also provide a model of how to deal with a different normative realm, such as ethics. In his longer essay S-M suggests that it's conceivable that maybe there would even be something that justifies both our epistemic beliefs and our ethical ones. His example is a long shot, but it's possible, I guess. Enoch really seems to me to be operating in that model: trying to find what justifies our epistemology, and then reflecting that back on ethics.

There's another thing S-M wants to say, which is that any argument concluding that normative stuff in general is no good would be too strong. But there's a natural weakening of these arguments: they give a prima facie reason not to believe in the existence of something. We can allow stuff into our ontology even if it's problematic to do so.

OK, one thing out of the way: this probably gives a good argument against a specific interpretation of the argument from queerness. Specifically, if the argument from queerness gives us a reason not to believe in ethical stuff, it should also give us a reason not to believe in epistemic stuff. Queerness says "there's nothing else like it," but here we went and found something else like it.

Is there another way to go? I want to throw out the possibility that maybe the game's over once we have some normative beliefs, specifically epistemological beliefs. Here's one argument: being justified in something, we'll assume, has some aspect of normativity to it. And so here's one possibility. Let's say that we analyze "X is justified in believing Y" as "X ought to believe Y." Then epistemology is full of ought-statements about belief. And then the only thing that distinguishes ethics from epistemology is that ethics deals with a lot more actions than belief (assumption: belief is an action). And it's not just justification that's normative, because other epistemic notions involve justification: you know something only if you're justified in believing it.
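To make the proposed analysis explicit, here it is schematically (my own notation, just a gloss on the idea above, not anyone's official formulation):

\[ \text{Justified}(S, p) \;=_{\mathrm{df}}\; \text{Ought}\bigl(S,\ \text{believe } p\bigr) \]

On this reading, every justification claim in epistemology is an ought-claim about a particular kind of action, namely believing.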

Another way for the game to be over is if we showed that ethics was doing nothing more than justifying beliefs. "You ought to do X" could be reinterpreted as "You ought to believe that you ought to do X." And what would be wrong with that? Plenty. It's actually a remarkably bad argument. Moving on...

A third option is that ethical and epistemological norms are just quite different things. As different as ethics and etiquette. Well, then what? Then I guess we're back to Enoch's and S-M's efforts. Which isn't necessarily a bad thing. It's just that Enoch is problematic (and I think the problems come from pushing epistemology too hard and from carving up our knowledge) and S-M doesn't really offer anything. So if there's anywhere to go, maybe it's by just saying "belief is an action, so why not?" But that's a pretty rotten argument too, because you could just as easily say "picking up your fork is an action, so why not?" Oy, still not getting anywhere.

Enoch on S-M

1.A.8 Sayre-McCord
In the last section of his (1988a) Sayre McCord presents – in just four pages – what is by far the most detailed and careful attempt at the second strategy I know of. Despite its lack of crucial details, this section clearly anticipates my line of argument for Robust Realism.
Sayre-McCord starts by noticing that the explanatory project – indeed, the explanatory requirement itself – is normative through and through, for it requires, at the very least, that we evaluate explanations, and choose the best one. He notes that there is little to recommend a view that is realist about some normative facts but not about moral ones, and so he concludes that our engagement in the explanatory project already commits us to evaluative, and so to moral, facts. There is just no way of engaging in explanation without relying on normative facts.
Sayre-McCord goes even further than that. He argues that the respectability of normative (and in particular moral) facts and properties does not depend on their indispensability to the explanatory project:
The legitimacy of moral theory does not require any special link between explanatory and moral justification. (280)

Instead, what guarantees the respectability of moral and other normative facts are their justificatory, not their explanatory, role:
Just as we take the explanatory role of certain hypotheses as grounds for believing the hypotheses, we must, I suggest, take the justificatory role of certain evaluative principles as grounds for believing the principles. (278)

This is, I take it, an explicit rejection of the first strategy of Harman’s Challenge, a rejection of the explanatory requirement, and the beginning of an argument for normative realism from a different kind of indispensability. Though Sayre-McCord does not use the term “indispensability argument”, he does often say that evaluative facts are indispensable (e.g., 279). And he suggests that we talk in this context, instead of an inference to the best explanation, of an inference to the best justification (ibid.).
Now, Sayre-McCord does not give some crucial details here (details which I try to give in this chapter and in the rest of this essay): What exactly does indispensability amount to? Why does indispensability to the explanatory and the justificatory projects justify ontological commitment? Why is there a justification-related need to invoke irreducibly evaluative facts and not just, say, psychological ones about one’s brute desires or preferences? Furthermore, in some important respects the line he seems to suggest is different from mine: For one thing, I cannot see how anything like inference to the best justification can be made to work. More generally, it seems to me the justificatory work normative facts do matters to us because of the deliberative indispensability of justification. What is intrinsically indispensable, in other words, is the deliberative project, not the justificatory one. The latter only matters because the former does. This is why I think the argument for Robust Realism is better put in terms of deliberative rather than justificatory indispensability.
Despite the lack of details and these differences, and despite Sayre-McCord’s commitment to Metaphysical Naturalism, it is clear, I think, that his suggestions anticipate – in broad outline, at least – my indispensability argument for Robust Realism.

Particularism versus Methodism

Sosa distinguishes between particularism and methodism. Methodism starts by outlining what the correct epistemological methods are, and then checks to see how much knowledge we have given those methods. Most people don't really do that, I don't think. Much more common in epistemology is particularism. This is a methodological starting point: you start with knowledge, and then you attempt to justify and understand what norms and principles give us that knowledge. It's not just an attempt to describe our knowledge; the description is in fact normative in this case. If it turns out from our study of knowledge that people require observation in order to know stuff, then observation is normative, in the sense that you only ought to believe something if it's been observed. That's the methodology for a lot of epistemology.

You might ask, then, why not do the same thing for ethics? Say that we're assuming that there is ethical knowledge, and that we need to understand how that's possible. Then how we gain that ethical knowledge is normative. Given that we do the same thing for epistemology, how can we distinguish the two?

The obvious difference is that whether there is ethical knowledge is debated. But I suppose if one were quite sure, on a first-person level, that there was ethical knowledge, there's nothing to stop him from proceeding as we do in epistemology. It certainly won't convince anybody, but it's not aiming to. Just as epistemology often says that it has no need to answer the absolute skeptic, a moral epistemologist could say the same thing. This only works if you are REALLY sure that there is ethical knowledge, but given someone that certain, it's hard to think of a way to criticize their methods when there's an exact parallel methodology in scientific/general epistemology.

Sunday, November 22, 2009

Metaethics and Philosophy of Religion

Another fruitful parallel? I like when philosophy does this. Here's Plantinga:

If a man believes that the star Sirius has a planetary system containing a planet with mountains over 40,000 feet tall, then if his belief is to be rational or reasonable he must have some reason or evidence for it. Similarly, it may be said, with the existence of God: the theist must be able to answer the question "How do you know or why do you believe?" if his belief is to be rational; or at any rate there must be a good answer to this question. He needs evidence of some sort or other; he needs some reason for believing. Obviously this raises many questions. What is evidence? What relation holds between a person and a proposition when the person has evidence for the proposition? Must a rational person have evidence or reasons for all of his beliefs? Presumably not. But then what properties must a belief have for a person to be justified in accepting it without evidence? Is a person justified in believing a proposition only if it can be inferred inductively or deductively from incorrigible sensory beliefs? Or propositions that are obvious to common sense and accepted by everyone?


Math, religion, and ethics all face the same challenge: if I can't sense it, can I really know it? The answer might seem to be no, but the idea that we only believe what we can sense fails horribly. So we try to latch math, religion, or ethics onto those principles that guide us when we leave the realm of observation. In math there is the claim that this is enough for numbers (though Parsons raises an objection quite parallel to the one Enoch raises against the naturalist Cornell realists: that resorting to the naturalistic argument loses what is special about math/ethics). If ethics and religion fail this test, does that mean that they're out? Plantinga says "no" in the case of religion.

A philosophy of math thought

Almost without fail, when I talk to my math friends about whether mathematical statements are true or false I'm told:

"Well, it depends what you mean by true or false. Mathematics doesn't make any substantive claims, we just make conditionals. You start with some axioms, and then deduce from there. So all mathematical knowledge is conditional: if the axioms are true, then this deduction is true."


Why isn't this quite right?

First, this avoids answering basic questions about really basic math. Is 2+2=4 true or false? Can we only talk about it being true in Peano's world, or true in ZFC's world, without being able to talk about it being absolutely true or false? What explains our choice to study the worlds where 2+2=4 is true instead of false? Is it just whim? Can we then imagine living in a world where 2+2=5? Or maybe we pick the system because it matches up with our world. But doesn't that mean that we believe that 2+2=4 is really true? In that case we are committed to the existence of actual numbers. And if we want to maintain that there are no such things as numbers, we then have to explain why we study 2+2=4, and we have to explain why that statement seems so incredibly true about our world.

Second, it evades answering questions about the axioms. How do you choose which axioms to study? Why are we picking some axioms instead of others? If we're picking axioms that properly describe the geometry of the earth, does that mean that we're studying something factual when we study that geometry? When it comes to set theory, and the foundations of things like arithmetic, we might be less willing to be wishy-washy about which axioms are true. This argument is somewhat dependent on the one right above.

Third, Hilbert tried to be a formalist, but most folks think that he failed: by Gödel's second incompleteness theorem, these formal systems can't prove their own consistency, so there are tremendous tensions in trying to believe that we just work with a bunch of formal systems that we take to be consistent.

Explanation and Justification

I'm reading Peter Lipton's book on IBE right now, and it's a good read with nice chapters on induction and explanation.

If you think about it, inference to the best explanation is a really interesting principle when it's used to justify inferences. Meaning, consider the following: I see water coming out of my wall; the best explanation of this is that a pipe has burst; therefore I am justified in concluding that a pipe has burst. But what justifies this conclusion? IBE. So inference to the best explanation plays a justifying role. We then ask, "What justifies IBE?" Well, that is what I've been trying to work out. Enoch and Shechter give an account of epistemic justification that allows pragmatic considerations (of a very specific type) to play a role in justifying our basic principles, such as IBE. I think that this runs into problems that are pretty big, but it is a subtle argument with intuitive appeal.
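Schematically, the inference pattern at work in the pipe case looks like this (a standard way of laying IBE out, nothing original to me):

\[ \frac{E \text{ is observed} \qquad H \text{ is the best available explanation of } E}{\text{we are justified in believing } H} \]

The question being pressed here is what justifies the rule that licenses that last step, not any particular instance of it.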

Another attempt, which occurred to me, is to simply take certain beliefs as primitive. IBE is unjustified, or justified by things that it justifies. This suggests a coherentist picture of some sort. Enoch is decidedly working in a foundationalist epistemology, one that thinks that justification is linear and asymmetric.

What else is there? You might think that the notions of explanation and justification are simply linked--maybe the reason why inference to the best explanation is justified is that what you're really doing is inferring to the best justification, so that IBE is never leading you to believe something that isn't justified. But this doesn't work, since explanation is actually quite different from justification. This comes out well in Lipton. One way to see this is that we have a much easier time believing that certain beliefs explain themselves or don't require explanations, while it's much harder to come across beliefs that seem to justify themselves, or require no justification. Also, knowledge seems to FAR outstrip explanation: we only begin to seek an explanation of a fact after we are secure in our knowledge that the fact is true! So it's quite a substantial claim to say that explanation guides justification, as IBE seems to make it do (I think). Obviously, explanation cannot be all there is to justification. Is explanation a specific type of justification? Er, I'm confused.

Nearing the half-way point

I'm about halfway through this project. So how am I feeling about it? I'm having fun, I'm learning a lot, but I'm finding it really hard. Now, I don't mind finding it hard. If philosophy were easy no one would be doing it. The difficulty and seeming insurmountability of a question is almost definitional of a philosophical question. Easy questions don't get asked in philosophy. But that's all a consolation prize--I really would like to have gotten a much better or deeper understanding of something, and I don't think that I've really gotten there yet. If I keep on plugging away, will I gain some new insight into the issues that interest me? Maybe yes, maybe no. There's no way to know.

Here's a disappointment: for now my focus is turning away from the intersection of math and ethics and squarely toward the intersection of philosophy of science, epistemology, and ethics. We'll see how far that goes. But for now it seems as if philosophy of math has been asked to sit this one out. In retrospect, this was a long time coming, since the power of the indispensability argument is its claim to just be a plain-old boring scientific and inductive argument. What makes it so appealing in philosophy of math is that it seems to be completely acceptable to the scientist.

I'm going to keep on reading about philosophy of math on the side, wondering if maybe there isn't some way in which the mathematical platonist goes beyond inference to the best explanation in order to reach his conclusions. And maybe if ethical realism fails, ultimately we can tie mathematical realism to that sinking ship. Anti-realism tends to be far less interesting to me than realism, though. My deeper convictions are that math and ethics are clearly different than plain old empirical knowledge in some important respects, but that they're completely respectable arenas in which disagreements can be rationally voiced and where certain claims are true and others are false. But I'm going to be open here, and if I find an argument that pushes me towards anti-realism, I'll have to deal with that.

In the meantime, here's where the project currently stands. I don't feel too hopeful about where it is right now. I don't believe that I'm on the verge of finding something. But anyway, here it is: inference to the best explanation is a respectable principle. But what justifies inference to the best explanation? Do you need to fall back onto something normative in order to get the whole thing going? Is that any different from just saying that justification is a normative concept through and through? And if some sort of normativity is needed in order to justify scientific claims, how much normative theory creeps in? The question is analogous to one in math after the indispensability argument: math is necessary for science, but how much math? In the same way, maybe some normative theory is necessary for science (in the sense that in order to be justified we need normative theory), but how much normative theory? Does moral theory creep in?

I sincerely think that the answer is "no," but this is likely to be the topic of my half-way point paper that I'm writing this week. To be honest, I keep on hoping that I smell a trail that will send me in a slightly different direction. I think that this direction is absolutely fascinating, but I don't think that it's a very promising one for ethics. But here we go, anyway!

Tuesday, November 17, 2009

Looking at S-M again

So, S-M points out, and let's say that we agree, that some kind of evaluative facts are necessary for inference to the best explanation. Some kinds of evidence are better than others, some explanations are better than others. We have value.

Now, I think that it's relatively uncontroversial (though I could be wrong) that we can pass from evaluative facts to normative facts. If evidence is good, then we ought to believe what it supports; it gives us a reason to believe something. I think all of these notions are interchangeable, but I'm just talking about what seems reasonable to me.

So then there are some normative facts, but they're about epistemic value, epistemic normativity.

So let's say that we can show that facts about epistemic value are indispensable to science (because belief in science means belief in IBE, and this commits you to a host of epistemic values that support and sustain IBE). Where does this leave us with ethical facts?

There's an analogy to be drawn from math. Math is indispensable to science, but not all of math. And so some math gets tossed for being fictional (higher set theory, let's say). And some math gets included in order to round off the actually indispensable mathematics. And some math you are committed to just because it is built off, constructed out of the math that's needed for science. It's part of the theory that's needed.

So, if epistemic normativity is indispensable to science (and that's not a given, and the claim needs to be made more precise), then what relation does ethics have to epistemic normativity? Is it part of the same theory (one unified theory of normativity)? Or maybe it supervenes on epistemic normativity--that would be odd; not sure what that would mean. Or maybe it's needed to round off and make reasonable the epistemic normativity. Or maybe it's completely irrelevant.

Random thoughts

Hit a bit of a wall, trying to feel my way forward. Here are some random thoughts:

More problems with Enoch

Enoch writes, "Instances of inference to the best explanation are justified, then, because they are arguments from indispensability to the explanatory project, which is essentially unavoidable." In another paper, he writes "Employing IBE is needed for successfully engaging in the explanatory project, and this explains why we are justified in employing IBE as a basic rule in our thought."

So IBE is justified for the sake of our explanatory project. But does that allow us to use IBE when we are not seeking an explanation? Sometimes we seek knowledge because we're interested in explaining things. This is, quite plausibly, what motivates much of scientific investigation. But don't we also seek knowledge for many reasons besides explanation? I want to know what's causing my ceiling to drip water. I employ IBE to infer that it's because a pipe has burst. But my use of IBE is only justified for the sake of the explanatory project. Is it so clear that I am justified in using it in this context, which has nothing to do with explanation? This project is quite optional; I could just give it up. Shouldn't I only infer to the best explanation when failing to do so would undermine the explanatory project? A response could be that if I don't always infer to the best explanation, then I'm failing to truly engage in the explanatory project. But isn't that claim just the claim that I must be seeking explanations at all times? So let's refine what we're saying: it would seem inconsistent to employ IBE sometimes and fail to use it at other times. But what's at issue is not our use of IBE; it's our evaluation of when such use is justified and when it is not. But maybe I'm still missing the point. Maybe once IBE is justified once, it's justified in every context. But this makes the justification of IBE mysterious. If we only need to justify IBE in certain contexts, why should we overextend ourselves?

Also, here's a way to present their argument:
(1) What we mean by epistemic justification is "being epistemically responsible."
(2) If we are in a situation where "if the method is not effective the relevant rationally required project is doomed to systematic failure," then it is epistemically responsible, and hence epistemically justified, to believe that the method is effective.
(3) The explanatory project is rationally required, and is doomed to failure if IBE is not effective.
(4) Therefore, we are epistemically justified in believing that IBE is effective.

Sayre-McCord

Still another reply, compatible with the first two but relying specifically on neither, shifts attention from science and from mathematics and logic, to epistemology itself. To think of any set of considerations that they justify some conclusion is to make a claim concerning the value (albeit the epistemic as opposed to moral value) of a conclusion. To hold of science, or mathematics, or logic, that there is a difference between good evidence or good arguments and bad ones is again to commit oneself evaluatively. This raises an obvious question: under what conditions, and why, are epistemic claims reasonably thought justified? Whatever answer one might begin to offer will immediately provide a model for an answer to the parallel question raised about moral judgments. There is no guarantee, of course, that our moral judgments will then end up being justified. The epistemic standards epistemology meets might well not be met by moral theory. But there is good reason to think the kinds of consideration that are appropriate to judging epistemic principles will be appropriate too when it comes to judging other normative principles, including those that we might recognize as moral. This means that any quick dismissal of moral theory as obviously not the sort of thing that could really be justified are almost surely too quick.


What does he mean? Here's a quote from "Explanatory Impotence":

To take one (optimistic) example: Imagine that we justify believing in some property by appeal to its role in our best explanation of some observations, and we then justify our belief that some explanation is the best available by appeal to our standards of explanatory quality, and finally, we justify these standards by appealing to their ultimate contribution to the maximization of expected utility. Imagine, also, that having justified our standards of explanatory value, we turn to the justification for cultivating some moral property...we might justify these benefits by appeal to their maximizing expected utility.


The only way that I can make sense of this (given that his example is WILDLY IMPLAUSIBLE, as he himself acknowledges) is to read him as doing something like what Enoch actually does. Enoch thinks that he's working in this spirit, I think, given what he says about S-M in his thesis. It's as good an explanation as any.

There's one argument in S-M that I want to dismiss. He writes "Once it has been granted that some explanations are better than others, many obstacles to a defense of moral values disappear. In fact, all general objections to the existence of value must be rejected as too strong."

Here's Field explaining why I don't like S-M's argument:

Justification is not an all or nothing affair. The belief in mathematical entities raises some problems which I and many others believe to be fairly serious. These puzzles provide reasons against the belief in mathematical entities, and to put it very crudely what we must do is weigh the reasons for and the reasons against in deciding what to believe.


So we can have objections to the existence of epistemic values even if we decide, in the end, to embrace them in our ontology. And that means that those objections could still stand against moral values.

Other stuff

Explanation runs out well before justification does. There don't seem to be regress problems with explanation, because there are some statements that really, really don't seem to need explanation. This is easier to argue for than the claim that there are statements that don't need justification. For example, does IBE need explanation? Is there anything mysterious about it? No, I don't think so. But it does need justification. This is a fairly obvious point, but one that I enjoyed making explicit. Also, note that being an explanation doesn't seem to have anything to do with justification; only best explanations have anything to do with justification. I take this to be an indication that justification is a normative concept, but that has been argued for by others and needs more argument than I just gave.

An ontological argument for our best explanations being justified:
(1) One of our explanations is the best explanation.
(2) An explanation is worse if belief in it is unjustified.
(3) Therefore, belief in our best explanation is justified.

There's a bunch of flaws in here, but it's kinda cute. Wrong, but cute.

Friday, November 13, 2009

Weekend reading

Epistemology

Lycan's "Epistemic Value"
Enoch and Shechter's "Basic Belief Forming Methods"
Something on Coherentism


Metaethics

Sayre-McCord "Explanatory Impotence and Moral Theory"
Putnam's "Fact and Value" (though I've been warned it's not very good)


Philosophy of Math

Colyvan, The Indispensability of Mathematics

Summing up this week's thinking

This week I tried to bone up on general epistemology. Enoch claims that what justifies inference to the best explanation also justifies something like inference to what is indispensable for deliberation.

When I read Enoch, I came away with three complaints.

1. If you admit normative facts for the sake of deliberation, your explanations of the world will suffer. For example, you have to explain certain things about these normative facts. And maybe our best explanation of some phenomenon is that normative facts don't exist. So the projects of explaining and deliberating are in tension, then.
2. I didn't understand how his justification for inference to the best explanation was a good one. If it's not a good one, is there a better one, and does that better justification also include normative facts? Also, is it possible that we shouldn't try to justify inference to the best explanation at all, that it's a basic fact?
3. Is believing in normative facts really indispensable for deliberation?


In order to grapple with the second question, I started looking into what philosophers say about justification in general. There is a debate in epistemology about justification and its structure. Does justification ever run out? Do we have to simply accept some unjustified claims? Or maybe we don't, because there are certain statements that are justified without reference to some other belief. Or maybe all we need is a coherent worldview, with our beliefs supporting each other.

Another issue is something that Enoch himself brings up in his dissertation: his argument is only for the existence of some normative facts. But what if there are normative facts without there being moral facts? From what I can tell, some philosophers give accounts of knowledge and justification that are explicitly normative. In that case, all knowledge, including scientific knowledge, requires some normative claims. But does the fact that there are epistemic values imply that there are also moral values? Certainly not. Does it lower the cost of accepting moral facts, since we're already admitting some values into our ontology? Arguably yes. Though I think there's a pretty good argument (to be fair, I think I saw it in Hartry Field) that this doesn't help us very much. Because if we are wary of including normative facts for reasons beyond absence of evidence, then even if we're "forced" to accept normative facts for justification or knowledge's sake, that doesn't mean we have to be happy about it. And we still might try to minimize the appearance of more facts about the normative realm. (Though I sometimes wonder if we could just admit any old normative facts, and then have moral facts supervene on those any old normative facts.)

I'm emerging from the week excited and overwhelmed. There are big debates in epistemology, and I'm just starting to get exposed to them.

But how necessary is it for my project? It really depends where I go from here. I'm fairly convinced that Enoch's project runs into a bunch of problems, but they're interesting problems. I could try to give another argument that defends moral realism, trying to do what I think his argument can't do.

I'm also a bit frustrated that I can't bring this back to math yet. But hopefully, if I do find a way to defend moral realism that's novel and interesting, I can then turn to math and say "Does this work there?" Then hopefully I'll have something interesting to say about philosophy of math.

Already there's one difference between math and ethics that seems to make a big difference: math is thoroughly integrated into our scientific knowledge. But what I'm looking at now is whether the normative (if not the ethical) is just as indispensable for the justification of our knowledge. That is, math is necessary for the expression of our knowledge, but values might be necessary for the justification of that knowledge. Whether one of those projects should be privileged is an interesting question as well (and it relates to my first objection to Enoch).

Wednesday, November 11, 2009

Questions that I'm reading about right now

Here are the questions that I'm looking to understand better:

What is the structure of knowledge? Is every belief justified, or do we have some unjustified beliefs, or some beliefs that justify themselves? What is the status of inference to the best explanation--if it's justified, how is it justified? Note that Enoch's argument depends on there being a justification for inference to the best explanation.

If one believes that there are some normative facts, does that mean that there will be ethical facts as well? What are the costs of continuing to deny the existence of ethical facts in the face of the existence of some normative facts?

Is explanation more basic than our other needs? That is, if our non-explanation considerations require us to believe something, and explanation requires us to not believe it, does explanation win, and if so why?

Is there a way to use an indispensability/transcendental argument to defend moral realism? How does that relate to our defense of mathematical realism?

Tuesday, November 10, 2009

More on Sayre-McCord

Here's a bit of a rambling (and by rambling, I mean bad) post:


Once it has been granted that some explanations are better than others, many obstacles to a defense of moral values disappear. In fact, all general objections to the existence of value must be rejected as too strong.


The argument seems to be the following: suppose that you have an argument against evaluative/normative claims. You say, "If values were to exist, then YYY. But XXX suggests that YYY is not the case. So values don't exist." Sayre-McCord is saying that such an argument would now be impotent, because we know that some values exist. Since some values exist, the argument fails.

Does this follow? After all, belief is not binary. Some things are reasons to believe something, and others are reasons not to believe it. Our beliefs don't have to be clear-cut. We can say, "We believe IBE; believing in IBE requires that we also believe in the existence of some values. We do this despite the rather convincing arguments that suggest that such values do not exist." (Here I'm trying out an argument that I attribute to Hartry Field.)

Now, clearly this doesn't always work. If the argument concludes that "no abstract objects could possibly exist ever ever," and then you believe that values exist in order to undergird IBE, and you also believe that values are abstract objects, then your beliefs are in conflict and one of them has to go. And if you are set on IBE, then it seems that the "no abstract objects" argument is going to have to go. But there are other, weaker, more subtle arguments that don't have to lose their force, I think. Think about the argument from disagreement. OK, actually that's a horrible choice, because that's an argument tailor-made for ethics and not for normative facts more generally. OK. Think about the argument from queerness. Say it concludes that if normative facts existed, they would be unlike our normal objects, because they would have ought-ness built into them; they would be intrinsically motivating or something. And say that we then, following S-M, conclude that normative facts about explanations exist, that values about explanations exist. Why do we believe this? Because our commitment to IBE forces it. But as long as your argument against normative realism isn't absolute, and just raises the stakes of realism, I see no reason why you can't maintain that argument even after accepting normative facts about explanations.

To put it more clearly: as long as your argument against normative realism isn't definitive, it can be maintained even after accepting some normative facts. You simply say, "In the case of normative facts about explanations we have no choice, epistemically, but to accept these values despite their queerness. But there's no similar consideration forcing our hand in morality/ethics, so we remain skeptical of normative facts about ethics, because those normative facts would have to be queer."

Was queerness a bad choice? Once you accept that some normative facts about explanations exist, does that remove their queerness? On the one hand, we are now saying that we accept abstract objects with to-be-done-ness built into them. But we were forced, kicking and screaming, to accept those values. That doesn't mean it suddenly became any more palatable to believe in them, and I don't think it undermines the argument from queerness.

A new argument for moral realism?

Geoffrey Sayre-McCord has an article, "Moral Theory and Explanatory Impotence." In the last section he presents an argument for moral realism. Here's what he writes in his entry on moral realism in the Stanford Encyclopedia of Philosophy; I find it clearer than what he writes in the article.


Still another reply, compatible with the first two but relying specifically on neither, shifts attention from science and from mathematics and logic, to epistemology itself. To think of any set of considerations that they justify some conclusion is to make a claim concerning the value (albeit the epistemic as opposed to moral value) of a conclusion. To hold of science, or mathematics, or logic, that there is a difference between good evidence or good arguments and bad ones is again to commit oneself evaluatively. This raises an obvious question: under what conditions, and why, are epistemic claims reasonably thought justified? Whatever answer one might begin to offer will immediately provide a model for an answer to the parallel question raised about moral judgments. There is no guarantee, of course, that our moral judgments will then end up being justified. The epistemic standards epistemology meets might well not be met by moral theory. But there is good reason to think the kinds of consideration that are appropriate to judging epistemic principles will be appropriate too when it comes to judging other normative principles, including those that we might recognize as moral. This means that any quick dismissal of moral theory as obviously not the sort of thing that could really be justified are almost surely too quick.


The claim is that inference to the best explanation** presupposes the existence of some facts about which explanation is best. Sayre-McCord claims that this means that we have to believe in some values. The obvious counter is to try to give an analysis of what it means to be the best explanation in non-normative, non-evaluative language. For example, if I were to tell you that values exist because we know that there are facts about which baseball teams are better than others, you would have an easy way to counter this: "THAT'S not what we mean when we say that some baseball teams are better. Rather, we mean that they win more games, not that they are good, or that you ought to approve of them or something." In the same way, we could analyze what it means to be a better explanation and avoid any commitment to values.

"The obvious response to this point is to embrace some account of explanatory quality in terms of, say, simplicity , generality , elegance, predictive power, andso on. One explanation is better than another, we could then maintain, in virtue of the way it combines these properties. When offering a list of properties that are taken to be measures of explanatory quality, however, it is important to avoid the mistake of thinking the list wipes values out of the picture. It is important to avoid thinking of the list as eliminmating explanatory quality in favor of some evaluatively neutral properties. If one explanation is better than another in virtue of being simpler, more general, more elegant and so on, then simplicity, generality and elegance cannot themselves be evaluatively neutral. Were these properties evaluatively neutral, they could not account for one explanation being better than another."


Now, under one interpretation this is a horrible argument. After all, what's wrong with the analysis of the baseball team above? Would we say that we're faking, that "winning games" is actually a value-laden attribute? Certainly not; it's dry, evaluatively neutral. So why can't we say that "better" only means "simpler" (and other stuff)? To put things more clearly: we're not saying that the best explanation is the simplest one. If that were the case, then Sayre-McCord would be right. Rather, we're saying that you should eliminate the word "best" and replace it with the word "simplest." If you remove the word "best," then there's no more value-ness hanging around inference to the best explanation.

Maybe, instead, he's just making the argument that SOMETHING needs to justify inference to the best explanation. He writes, "any attempt to wash evaluative claims out as psychological or sociological reports, for instance, will fail--we will not be saying that one explanation is better than another, but only that we happen to like one explanation more or that our society approves of one more." So maybe he's just asking what justifies inference to the best explanation as a principle. And even if you replace the word "best" with "simplest," you still need to find a way to justify the principle. And however you justify it will require you to say that "you ought to believe the best/simplest/prettiest/most predictively powerful explanation." And that will need to be a value claim. Of course, this argument would need to be distinguished from the kind of ought-ness that we find in rationality. If I tell you that you ought to believe that 2+2=4, what I mean is that rationality requires it of you. Granted, there's some sort of normativity there perhaps, and that's another approach to take. But S-M is clearly not trying to make that argument. So he needs to be saying that there is no other good justification of IBE, I think.

In which case the argument is this: justification runs out at IBE, and you then need to accept some kind of evaluative fact that you can't otherwise justify in order to accept IBE. This is different from the claim that whatever justifies IBE also justifies normative facts (which is closer to Enoch). Rather, his claim is that if you chase IBE up to its source, IBE needs to make some sort of evaluative claim.

You could avoid this argument, as I said in the last section of my paper on Enoch, by refusing to justify IBE. Justification needs to run out somewhere, and by giving a bad justification you, arguably, make your job too easy. And also, as Sayre-McCord notes, this is just a model and not an argument.

**Sayre-McCord actually seems to reject inference to the best explanation as a sufficient condition for belief-formation, but he thinks that it's at least plausible to say that it's necessary. Meaning, if something doesn't figure in the best explanation, Sayre-McCord is willing to consider that its explanatory impotence would count against its being knowledge.

Saturday, November 7, 2009

Responding to some of a commenter's points

But is it necessary for numbers to exist for me to be able to use them in my explanations of the world? In physics, we talk about the world as if there were a big three-dimensional grid running through it, but it doesn't actually exist.

It's absolutely necessary for numbers to exist in order to use them in my explanations of the world...unless it's not. What I mean is that the burden of proof is on the "fictionalist" to show that belief in the existence of numbers isn't necessary for their use in science. This is because it kinda seems that mathematical truths make explanations better--our explanations of the world would be worse if we didn't have math and didn't think that numbers existed. But, hey, if you can give a convincing account of how science uses math in a way that doesn't commit you to their existence, power to you. Hell, you might even think of calling your book "Science Without Numbers."

I find your examples somewhat problematic, in that leaky pipes and protons are both objects in the physical world, which numbers clearly are not. I think part of the issue is figuring out what it means for mathematical or ethical things to exist, but I don't know if thinking about pipes will really help with that. You mentioned gravity in a previous post, and that's closer, but maybe something like color or the principles of musical harmony would work better? (Depends what parallels you want to draw.)

This argument doesn't follow through; it stops before finishing. Because it's obvious that there are differences between protons and numbers. Sure, protons are physical objects, and (according to most) numbers are not. But so what? Do only physical things exist? Can you argue that, or do we just assume that from the start?

Asking what it means for non-physical objects to exist is a fair point. Here's the way the indispensability argument for mathematical objects works: Whatever it means for something to exist, we do know that being an indispensable part of our best explanation is sufficient for us to say that something exists. And numbers pass that test. You still probably need some sort of account of what existence means (does it just mean that the facts are true independent of human thinkers? does it mean that we can disagree? does it mean that we can know facts about them? etc.), but that's not essential to the argument. Of course, you could come back and counter "Well then, I have a different principle for how to determine what things exist" and we'll have to see how that one fares.
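For reference, here's the standard schematic form of the indispensability argument (roughly as Colyvan lays it out; this is the textbook version, not something the commenter or I stated above):
(1) We ought to be ontologically committed to all and only those entities that are indispensable to our best scientific theories.
(2) Mathematical entities are indispensable to our best scientific theories.
(3) Therefore, we ought to be ontologically committed to mathematical entities.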


The question is, would it matter if you actually observed a cat burning in the real world? If I made an animation of children burning a cat (even a really crude one), or just talked about burning a cat, you would have the ethical observation that burning cats is bad. It seems then more like a making aware of something that you already know, rather than something you learn from observation (it crucially doesn't matter whether or not anybody has ever burned a cat). In science, on the other hand, I can talk about flying pigs as much as I want, but you won't get any biological insight out of it. Ethics seems to only require thought experiments, which makes it much easier to claim that it is simply an analysis of human thought processes, rather than of the world "out there".


...unless that's not true. I think we should distinguish between intuitions and observations, both in science and in ethics. A physicist might perform a thought experiment and come to some conclusion about what is possible or not possible. A biologist could do that too. So could an ethicist, reasoning out what situations were bad. But what your claim depends on is that there is no role for observation in ethics--that's exactly what's at issue here. And I do recognize that observation doesn't play the role in ethics that it does in science. Like I've said, there are no ethics laboratories, and it doesn't make sense for a guy in a lab coat to run a few tests to figure out if something is ethical or not. At the same time, it does seem possible to me that through observing the world one would reach ethical conclusions that one wouldn't reach from the armchair.

For example, suppose that you've never seen a guy rip off a customer in business. You think about it, imagine it, and say, "Hmm, that doesn't seem SO bad." And that's fine, until you witness some guy get ripped off. When you observe him getting cheated, you realize that there was cruelty where you didn't expect it, and you conclude that the action was unethical. Now, why doesn't this count as observation? "Well, if you had thought through everything carefully enough, you could've reached that conclusion from the armchair. You just needed to think a little bit more clearly, and then you would've realized that it's cruel." But that seems like an unfair standard to hold ethics to, for a couple of reasons. First of all, maybe the cruelty was extremely non-obvious, and it took observation to show it to you. Maybe the cruelty comes from the knowledge the businessman has that, through some complex transaction, this will ruin his customer's financial life. Second, it seems that there are scientific observations where you COULD'VE figured it out from the armchair in a similar way. Maybe we should be told that if we thought carefully enough about our own way of learning language we would be able to figure out a perfectly good psychological or linguistic theory. But that seems unfair. And isn't that what we're asking of ethics--to rely only on intuitions and never on observational insight? But observation is helpful, just as helpful as it is in the sciences (well, that's an exaggeration, but at least observation doesn't seem totally irrelevant).


My temptation as a non-philosopher is to view this as a psychological question--different people are satisfied with different levels of proof. I believe, for example, that Neil Armstrong walked on the moon in 1969. I also know people (otherwise normal and intelligent) who believe this is not the case. Clearly, we each believe ourselves to apply reasonable standards of proof in determining what is in fact the case. I don't know that this really transfers to math and ethics, but it might. (For example, in people's different reactions to the idea of triage in epidemics--some people can't get past the gut reaction, while others are willing to entertain the idea of trade-offs. There was a NYT article about triage today, which made me think of this.)


But there is a wholly separate question from the psychological one, which is the question of justification. We might be interested in human reasoning as something to understand and explain, but what about understanding what human claims and principles of reasoning are justified? That's the philosophical question. (I might as well copy and paste the opening paragraphs from Frege's "Foundations of Arithmetic," because I'm trying to make exactly the same point.)

Friday, November 6, 2009

Two questions

1) Are we able to give a good justification for inference to the best explanation (or any other principle that justifies our inductive inferences)? If we're not able to give a good justification, then how do we defend our choice to believe in inference to the best explanation? If we simply have to shrug our shoulders and take some things as primitive and given, then can we also take something else as primitive that would give us ethical facts? How can we do this while not also allowing facts about sorcery to gain validity?

(Note: what Enoch did was try to justify inference to the best explanation, and it turned out that his justification of inference to the best explanation also plausibly would include inference to what's necessary for deliberation, and this included some normative/ethical facts. So this question could be restated in the following way: is Enoch's the best justification for inference to the best explanation? It doesn't seem very good. If we can provide a better one, will it also justify ethics? If there is no good justification (which could be OK) then how do we prevent anarchy?)

2) What is the strength of the following argument: in order to believe in inference to the best explanation you have to believe that some explanations are better than others. This means that you have some criteria for what makes an explanation best. But why should you believe in the best explanation? Implicit in inference to the best explanation is that you ought to believe the best...never mind, this isn't going anywhere. I'll come back to number 2 later.

Tuesday, November 3, 2009

What I'm reading about now

So there are three choices you can make when it comes to something like inference to the best explanation.

1) You can give a good justification for the principle: I'm still looking to find one. The problem with justifying inference to the best explanation is that it's pretty much trying to justify inductive inference (meaning, trying to justify my belief that since the sun has risen every day it's also going to rise tomorrow). Hume provided arguments that make it difficult to justify inductive inference, though we can do our best to describe our practice. Then you can check to see whether your justification justifies anything else along the way, such as inference to the things that are necessary for ethical reasoning.

2) You can give an unconvincing justification: One could complain that this is what Enoch does. I tend to agree.

3) You can refuse to give a justification: after all, justification has to end somewhere. You (probably) can't have everything justified. The chain has to run out somewhere (probably). But then the challenge is, how do you avoid mayhem? If inference to the best explanation is unjustifiable, then is there anything wrong with us taking other wild and crazy principles as primitives?

Sorting this out is one of the current challenges I'm trying to learn more about.

Another related challenge is trying to figure out whether one implicitly assumes some normative stuff when you accept inference to the best explanation. After all, aren't we presupposing certain values when we talk about the 'best' explanation? Aren't we presupposing our ability to deliberate and come to conclusions? Doesn't this mean that there are "oughts" hidden in our scientific talk, and if those "oughts" are OK and in our ontology, then does it help out for believing in moral "oughts"?

And how the hell can I bring math back into this? (After all, my primary goal starting off wasn't to defend moral realism or to explore scientific anti-realism, but to understand math and ethics better through close analysis of the realism/anti-realism debate. Is that goal still possible?)

Handed in to Nickel about a week ago

I'll begin with a quick summary of Enoch's argument. Enoch is sympathetic to Harman's challenge to the normative realist. Harman argues that one only needs physical and psychological facts in order to explain normative observations. Normative facts play no (indispensable) role in explaining these observations. So normative facts are irrelevant for explanations of non-normative facts. Harman considers this a strong argument against the view that there are objective normative facts.

Although Enoch agrees with Harman that normative facts play no indispensable role in explanations, he argues that there are other ways to justify belief in facts besides indispensability to explanation. After all, what justifies inference to the best explanation (IBE) in the first place? According to Enoch's analysis, IBE is justified because of the "intrinsic indispensability" of the explanatory project. For a project to be intrinsically indispensable to me means that "I have no option of stopping (or not starting) to engage in it" (Enoch 34).

If this analysis is correct, however, deliberation seems to be just as intrinsically indispensable as explanation. Just as we have no option of simply ceasing to explain what we observe, we have no option of simply ceasing to deliberate when faced with decisions. Finally, Enoch argues that normative truths are indispensable for this deliberative project. Hence, we are as justified in believing in normative truths as we are in believing in protons, numbers, or anything else whose existence we infer from our best explanations. So, Enoch concludes that if IBE is a valid principle, then we are likewise justified in believing in normative truths.
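Laid out schematically (this is my reconstruction of the summary above, not Enoch's own numbering):

(P1) If a project is intrinsically indispensable, then we are justified in believing in whatever is indispensable to that project. (On Enoch's analysis, this is what justifies IBE with respect to the explanatory project.)
(P2) The deliberative project is intrinsically indispensable, just as the explanatory project is.
(P3) Normative truths are indispensable to the deliberative project.
(C) Therefore, if IBE is justified, we are likewise justified in believing in normative truths.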

Actually, Enoch believes something stronger than this conditional claim--he believes that we are unconditionally justified in believing in normative truths. But this claim depends on his thesis that he can ground "epistemic justification in pragmatic utility" (42), and he defends that claim in an unpublished manuscript. So I'll limit myself to the conditional conclusion (presented on p.43) that "the price one has to pay in order to reject normative facts is a denial...of the validity of IBE."

In the following analysis I'll raise and begin to develop two challenges to Enoch's argument.

II. Is the deliberative project in tension with the explanatory project?

According to Harman, normative facts neither harm nor help us in our explanatory project--normative facts are irrelevant to explanation. Enoch largely accepts Harman's analysis, but he counters that even if normative facts are irrelevant to explanation, their indispensability to deliberation justifies our belief in them. These normative facts are still irrelevant to explanation, but we have some other justification for introducing them into our ontology.

I am concerned that once these normative facts are introduced into our ontology for the sake of our deliberative project, they cause problems for our explanatory project. In other words, once we accept Enoch's argument we end up with a less-than-best explanation of the world. Our explanation of the world was better before we accepted the existence of normative facts.

If this concern is justified, then there would be a conflict between our deliberative and our explanatory projects. On the one hand, deliberation would be urging us to accept the existence of normative facts, and on the other hand explanation would urge us not to accept their existence. We would then need to find a way to reconcile these claims on our ontology, and we might decide that the explanatory concerns override the deliberative ones, in which case we would no longer be justified in believing in normative facts.

Why am I concerned that these normative facts cause explanatory problems? After all, didn't Harman show that normative facts are irrelevant to explanations? This would mean that normative facts do not (help or) harm our explanations. So how can normative facts cause problems for our explanatory project? Harman's claim was actually more limited. He argued that normative facts do not help or harm our explanations of non-normative facts. But once we introduce normative facts into our ontology, they are part of the universe and might have features that need to be explained, just like any other fact might.

Once normative facts are introduced into our ontology, our explanatory project has expanded. We now will seek explanations of any peculiar features of these normative facts. Some of these explanations are easy to supply. For example, "How do we explain our knowledge that these normative facts exist?" has a known explanation; the explanation is that the argument that led us to believe in these facts is valid. But there are other features of these normative facts that will require difficult, hard-to-come-by explanations. For example, it's often observed that normative facts are queer, in the sense that they have motivation built into them, such that if they are true they are sufficient to motivate an agent to act. This makes normative facts quite different from non-normative facts. What explains the queerness of normative facts?

Note that I'm not claiming that entities with these strange features cannot exist. That would be rehearsing Mackie's argument from queerness. Rather, I'm making a more modest claim: that these strange features require some sort of explanation. What's the difference between the arguments? An acceptable response to Mackie's argument would be that many of the objects that we normally believe to exist are metaphysically queer, and so queerness is not an obstacle to existence. But that response would be insufficient for my objection. My argument still requires an explanation of this queerness. If there is no explanation for this queerness, then there is something additional in the world that I am unable to explain, and so my explanatory project is worse off than it was before I believed in normative facts.

But isn't my objection based on a confusion? The discovery of the existence of any new object inevitably leads to more questions. This doesn't mean that, somehow, our explanation of the world is worse. Rather, we judge how strong or poor our explanations of the world are given the objects that we believe to exist. For example, suppose that we directly observed a new planet between Jupiter and Saturn. This would mean that we would have to throw out a lot of our astronomy--our models of the solar system's orbits would be wrong. So, in a sense, our explanation of the world is worse off once we know that there is this new planet. But this is clearly no reason to reject a planet that we've directly observed. So why isn't it the same when it comes to normative facts? We've recently discovered, thanks to Enoch, that normative facts exist. Once they exist, there are features of these facts that need to be explained.

The difference is that this new planet, if it were to exist, pulls its explanatory weight. An explanation of the world that didn't suppose that this planet existed would be a worse explanation than one that did assert its existence, because the former fails to explain our direct observation of the planet. But in the case of normative facts, the facts do not pull any explanatory weight. And so explanation only suffers from their presence.

(There are other ways to respond. Is explanation really needed? What sorts of things need to be explained?)

If my argument works, then Enoch's argument becomes more complicated. In order to reach his conclusion he has to show that the deliberative benefits somehow outweigh the explanatory costs.

III. Does IBE need a justification?

According to Enoch, the proponent of IBE needs to provide "a reason for taking explanatory indispensability to justify ontological commitment" (29). Further, he suggests that any such reason will be unable to justify explanatory indispensability without also justifying deliberative indispensability.

But one may reasonably wonder whether justification might run out at IBE itself. Enoch recognizes that "justifications come to an end somewhere" (42). There are some things that we must accept without justification (unless we’re confirmational holists, I suppose, in which case every belief has some justification in virtue of being part of the overall theory). These primitive beliefs we will hold as true, yet we will be unable to justify them. Suppose we take IBE as such a primitive belief, one that we hold as true but have no justification for. This would seem to undermine Enoch’s argument. He argued that the justification for IBE is the same as the justification for inference to deliberative indispensability—IBE is justified if and only if inference to deliberative indispensability is justified. But if IBE is not justified, then inference to deliberative indispensability is also unjustified.
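The logical core of this move is simple. Writing J(IBE) for "IBE is justified" and J(DI) for "inference to deliberative indispensability is justified" (my abbreviations, not Enoch’s), Enoch’s own claim gives us: J(IBE) if and only if J(DI). If we then take IBE as an unjustified primitive, so that not-J(IBE), it follows that not-J(DI). The biconditional that was supposed to carry deliberative indispensability along with IBE instead drags it down with IBE.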

The question then becomes, is there any reason to take IBE as a primitive belief that is not a reason to take inference to deliberative indispensability as a primitive belief? Now, since we’re refusing to justify IBE, reasons for taking IBE as a primitive are not reasons for thinking IBE is true. We will not be justified, in any sense of the word, when we believe IBE (but since we take IBE to be true, we will be justified when we employ IBE). Rather, we’re looking for a principle that guides our choice of where to stop seeking justification. I will not pretend to have such a principle. However, I can think of plausible candidates that would include IBE but exclude inference to deliberative indispensability in the set of primitive beliefs. For example, perhaps our principle will advise us to take as primitive only those beliefs that have some broad agreement already. We wouldn’t want our most basic beliefs to be too controversial. This is a bit dangerous as a principle, since IBE is itself somewhat controversial. But Enoch’s inference to deliberative indispensability is far more controversial than IBE. It wouldn’t seem prudent to take it as a primitive belief.

Enoch might respond that we are acting arbitrarily here, that we are simply stacking the deck against his robust meta-normative realism, and that we are unjustified in distinguishing between IBE and Enoch’s own principle. And that is exactly right. The reason this is nonetheless a response to Enoch is that he claimed he could force the proponent of IBE to accept a further principle of belief-formation, inference to those things that are indispensable to deliberation. But the proponent of IBE only needs to do that if he has a justification of IBE; if he has no such justification, because he’s taken IBE as a primitive belief, then there is nothing to justify Enoch’s principle. Then the question is whether Enoch’s principle is attractive as a primitive belief. Plausibly, it is not.

Explaining why Enoch’s principle shouldn’t be taken as a primitive belief is difficult; any principle for choosing primitive beliefs must either be justified or itself taken as a primitive. Eventually justification for these principles guiding the choice of beliefs to be primitive will run out as well, and we’ll be forced to take something as primitive again. At some point we will need to make some arbitrary choices. If Enoch’s argument is “Since we have to make some arbitrary choices eventually, why not arbitrarily choose to include inference to deliberative indispensability?” then I don’t think the argument is very convincing. Of course, since I’ve run out of justification, what I mean is that I don’t like such an argument, though I can’t justify my dislike of the argument. Perhaps I prefer to play things safe and not make arbitrary choices that will expand my ontology. Perhaps I am biased against arguments that would easily allow for principles that would force me to include sorcery into my ontology. One way or another, though, I don’t like this argument, though I can’t justify it.

The point of this line of argument, though, is that the anti-realist can avoid Enoch’s conditional conclusion by moving the discussion into the realm of the arbitrary, as opposed to the realm of the justified. By refusing to justify IBE, the anti-realist forces the burden of proof back on Enoch. Enoch must provide a way to add inference to deliberative indispensability to our set of primitive beliefs without opening the door to any old belief—even one that could justify sorcery—being included in the set of primitive beliefs.

Sunday, October 25, 2009

Inference to the best explanation and skepticism

Inference to the best explanation is a really interesting philosophical topic.

Let's start the story with Descartes. Skepticism becomes an option--what if we're all being deceived? What if our eyes and ears are lying to us? Descartes tried to answer this question by starting with some firm knowledge and slowly justifying almost all of our knowledge from a few secure facts. In this way all our purported knowledge would be justified, and we would be allowed our confidence again.

As the story goes, not everyone agreed with Descartes' secure truths. So other attempts at justifying our knowledge, at securing epistemology, began. Locke and Hume tried to justify human knowledge by matching all knowledge with observation. But, famously, this only gets you so far. So empiricism doesn't work to give us complete, skeptic-free confidence in our beliefs.

(Here I fudge the story a bit.) So what do you do now? Is it a free for all just because we don't have an answer to the skeptic? No, it's not. What we do is acknowledge that justification needs to end somewhere, and as it turns out justification for our beliefs ends before the point where we would feel totally secure in our beliefs. So there isn't any answer to the absolute skeptic. But that means we need to figure out where to stop.

So the game becomes trying to build your house as close to the cliff as possible. You lose if your house falls off the cliff, and you also lose if your house ends up looking like a mess. That is, the game is to find a starting point with as few assumptions as possible (satisfying a general desire for parsimony in the absence of justification) and to build and justify as much of our common-sense and scientific belief as possible. You lose if you end up with skepticism, and you also lose if you end up justifying silly beliefs, like beliefs in fairies or witches.

Right now I'm reading a paper where someone tries to say "We don't have to stop the trail of justification at inference to the best explanation. We have a way to justify inference to the best explanation that makes sense." As it turns out, that way of justifying inference to the best explanation is designed to allow for other kinds of "inference" besides inference to the best explanation. Specifically, it wants to allow for inference to the things that make deliberation possible, which would include normative facts.

As far as I can tell, the debate about this paper has to be--beyond the details of his argument, since he needs to establish lots of little points in order to get his big points into play--about whether he tried to go too far back. If his stopping point is no better than Inference to the Best Explanation, I think he'll find many people saying "Hmm, yeah that's interesting. I'm going to build my house right over here on ground that's a bit more secure, right here with Inference to the Best Explanation."

[His response will be: but by building your house over there, you still haven't made deliberation possible. That, though, is the second half of his debate. If the first point depends on the second, then he's got a very different argument on his hands.]

There's also a debate (that I know nothing about) where people wonder if inference to the best explanation is the right place to stop for reasons that have nothing to do with ethics.

Also, an observation: there are no arguments for mathematical or ethical realism that are particular to math or ethics. That's because realism itself isn't restricted to math or ethics. You win the realist game if you define plausible methods for coming to physical truths, and then use those same methods for reaching mathematical or ethical truths. This is exactly what inference to the best explanation/indispensability arguments do.

Monday, October 12, 2009

A weird idea about reductionism

What is reductionism? Here's an example of a reductionist approach--a reductionist approach to metaphor. You start with a sentence that seems kinda weird: "The city is a jungle; you've got to take care of yourself." On the surface this sentence seems to be quite similar to sentences where you identify two things, or at least apply some property to some subject, EX: "Michael is that guy" or "The eagle is a giant bird." If you classify "The city is a jungle" with those sorts of sentences you get really screwed-up results. What I mean is, if you take "The city is a jungle" literally you'll think that I'm classifying the city in the category of jungles--cities are jungles--which is just stupid. Obviously, that's not how you're supposed to read "The city is a jungle."

Quite clearly, what we need is to analyze what "The city is a jungle" means. What it really means is "The city is similar to the jungle in certain ways [e.g. it's dangerous and complex]." The point is that you start with some weird way of speaking that we don't think that we should take literally, and we reduce it to a level of discourse that we're more comfortable taking literally. This is what I just did with metaphor.

A more philosophical example: we start with the notion of causation, and we think that it's mysterious and confusing (Hume thought this). Hume doesn't think that a literal understanding of causation makes much sense, so he reduces it to a more down-to-Earth notion. He takes a sentence like "A causes B" and translates it as "Whenever A happens, B happens too." He's reduced discussion of causation to a discussion of regularity, the constant conjunction of one kind of event with another.

People do this to ethics too (Harman argues that one has to do something like this in order to maintain ethical realism). "Saving lives is good" sounds a bit spooky: it's not clear what things in the universe ethics is talking about, and if it's anything, it would seem to have to be abstract objects or properties that you can't see, smell, touch, etc. There's a bunch of problems. So people say, "Well, let's take a reductionist approach to ethics; we'll translate ethical statements into normal ones that we feel more comfortable with." This can mean, for example, that ethics gets translated into the language of emotions--after all, we all agree that people's emotional reactions exist, and so everyone should feel comfortable talking about that. You might reduce/translate "X is good" or "Y is bad" into "I like X" or "I don't approve of Y." (Strictly speaking, that's a simple subjectivism--emotivism proper says ethical sentences express feelings rather than report them--but either way it's a reductionist approach to ethics.) If you're willing to translate everything into the language of emotional responses, then you no longer have to say that there's anything special about the language of ethics; truth in ethics is just truth about the way you feel. There are no ethical facts, only physical facts, and in particular, physical facts about emotional responses to situations and actions.

I wonder, though, what if we were to try to reduce physical facts into ethical ones? That's a bit of a loony idea. But suppose that we felt comfortable with ethical facts and uncomfortable with physical ones. What's stopping us from trying to reduce physical facts to ethical ones? If it works in one direction, it should be possible to do it in the other.

How would that project go, though? We would need to find an ethical translation of all statements that refer to physical objects or properties! Let's take an example: "There is a zebra eating grass behind the barn." Now, could we simply translate this as "It's good that zebras eat grass behind barns"? Of course not, for a bunch of reasons. First, because it's not really a translation--we're still referring to zebras, grass, and barns, and this means that I'm still committed to the truth of some physical facts. Second, because a reductionist approach works when you can capture what is meant by the original sentence (for the most part) in the translation. There will be lots of situations when we would want to say, intuitively, that a zebra is eating grass, but we wouldn't always want to say that that's a good thing. Come to think of it, while the second is true, the first problem I mentioned is the real problem.

So, what's the reason behind this failure? Part of the problem is that there aren't any particular ethical objects that we can employ. The language of physical objects is very rich, and the language of ethical objects is quite poor. So what could we do to correct this? Maybe we should expand our circle out from ethics and employ all talk of values (this would include discussions of beauty, simplicity, etc.). It's still no good, I think, and I'm willing to diagnose the problem in the following way: value language has a lot of predicates, concepts, and properties, but very few objects.

(So there are a few complications to what I wrote. First, for naturalists there are only scientific facts, so to ask whether I can translate physical facts into ethical ones is a question that wouldn't make much sense. Also, I'd have to show how this argument works in math. Could we begin to talk about reducing physical concepts to mathematical ones? Probably not, but why? Is it because the language is so sparse? So then what's my point here?)

The absence of ethical objects in our everyday ethical manner of speaking is a complication that I run into in my research. The indispensability argument in math concludes that mathematical entities exist; it's not clear that anybody really wants such an argument to work in ethics. We want the objectivity of ethics, but ethics-talk usually involves applying predicates/concepts/properties to physical things--acts, deeds, states of the world, rules, whatever.

I think that this is a more precise way of saying Harman's argument. The reason why ethics seems to be dispensable, the reason why our best explanation of the world doesn't need it, is because it doesn't make reference to any ethical objects, just ethical concepts/predicates. And part of the reason why math seems indispensable is because we're making reference to mathematical entities.

Earlier I posted about why existence--ontology--should matter to us. After all, even if we have an argument that concludes that numbers exist we won't start suddenly bumping into numbers on our way to the library, work or school. Our lives will be the same; it's our perspective on the world that is liable to change. And I put out the following idea: the reason why existence is interesting is because ontology seems to secure semantics. We know that some sentence can be true or false if it refers to objects that actually exist--if it's actually talking ABOUT something in the world. So ontology becomes a handmaiden of semantics. Some philosophers disagree with this; they think that it's interesting to investigate the world to settle the question of what exists and what doesn't. I'm not sympathetic to that view. We know what it's like to live in the world, and if we're positing the existence of acausal, abstract objects that shouldn't really change my life. But it still seems to matter whether stuff exists or not, and I argued that the only reason that I knew of was because if X exists it seems that it could be true or false to say stuff about X.

If what I've posted is right, then there might be very different problems facing philosophers of math and metaethicists. Philosophers of math need to secure the objectivity of math by saying that mathematical objects exist. But ethics doesn't really make reference, in general, to ethical objects; it just applies ethical predicates to regular, normal physical objects. So the question of objectivity in ethics might be the kind that our form of the indispensability argument can't really touch.

Thursday, October 8, 2009

Summing up the way I've been thinking so far

I'm about to dig into a more substantial phase of research now. Over the next few weeks I'm going to be trying to understand what an explanation of the world is, and what the differences are between science, math, and ethics in this regard. So before I do, I want to clarify what I'm going into this research with, what my hypothesis (of sorts) is. I can't really defend this view; it's just a starting point, a bunch of suspicions that I have.

Without a doubt there is something that is quite different about ethics, science, and math. Pinning down exactly what that is is really the challenge, and the next challenge is trying to figure out what those differences should mean--why the differences matter.

My general hypothesis is that hard, strong epistemological lines tend to break down under stress. For example, Quine convinced most of the philosophical world that there is no strong, philosophically useful distinction between sentences that are true in virtue of the meanings of their words and sentences that are true in virtue of how the world is; there's no strong dichotomy between analytic sentences and synthetic ones. The argument comes down to just applying a great deal of stress to the dichotomy and watching it collapse under investigation. I expect, coming into a philosophical investigation, a similar thing to happen when we've set up boundaries between realms of knowledge. Facts/values, empirical/non-empirical, a priori/a posteriori, history/science, objective/subjective, and of course science/math/ethics are all ways that people have tried to divide up realms of knowledge. The problem is not dividing up realms of knowledge itself--everyone does that, because there are real differences between the areas--but when it comes to epistemology, how we know what we know about these realms, I expect the dichotomies to break down. I expect that the way we gain historical knowledge isn't very different from the way we gain scientific knowledge. I expect that the way we gain mathematical knowledge isn't so different from the way we gain scientific or ethical knowledge. That's my hypothesis, in general. It's heavily influenced by Quine and Putnam (and Dewey, through Putnam), I think. At the very least, it's influenced by a misinterpretation of Quine and Putnam.

So, in particular, I expect that the distinctions between how we gain ethical knowledge and how we gain scientific knowledge break down under pressure. I expect that all knowledge is more or less in the same boat. I suspect that the differences are all of quantity, and not quality of knowledge. So I suspect that any attempt to explain why scientific stuff is objective and ethical or mathematical stuff is not eventually breaks down. I think that the indispensability argument goes a good way towards showing how the distinction breaks down between math and science, and I suspect that something similar can be adapted for ethics. I think that there are at least two good, promising ways of doing this, but in the end every way of making the argument does it by trying to make science a little bit more modest.

But didn't I start by saying that there are real differences between math, ethics, and science? It's just a fact that there are no ethics laboratories in universities, and it doesn't strike us as a very good idea to start such labs. Why is that? An analogy from math is useful in clarifying the question. Once Quine/Putnam argue that math is actually empirical knowledge and as objectively known as science, they need to answer the question: what fooled people for so long? Why did people think that math was a priori and divorced from experience? So, if you argue that ethics is on a par with science, you have to explain why ethics strikes people as the sort of thing it doesn't make sense to start a lab for.

Of course, you could say that people are wrong, and that we really should be building ethics labs of some sort. Some people--I think I heard this attributed to Nagel or Parfit--think that ethics is just a young science, one that's bound to develop the way that other sciences have. So maybe these people think that opening ethics labs makes sense. But I'm more sympathetic to the position that there is something about ethics that makes such a notion strange. I guess this could be consistent with the view that we should open labs, but the view I find most attractive is that ethics is really, really, really hard. (This is also the reason why I'm not swayed by an argument from disagreement that ethics is subjective. Disagreement is consistent not only with subjective views, but also with really, really, really hard ones.) I bet that I could even show that some things that eventually fell into the realm of science were once considered subjective problems, ones that it wouldn't make sense for a lab to study. For example, I bet a lot of brain stuff fits this pattern.

How does this relate to the original program of contrasting math with ethics? Well, the idea is that by taking an argument that's found in the math literature and seeing how it holds up in ethics, we'll be able to see what math and ethics have in common. And my guess is that math and ethics can both be shown to be close to science when it comes to epistemology, and that the differences in the way they have been considered have to do with how hard or easy studying the subject is (math is more objective cuz we're able to isolate variables, ethics is REALLY hard cuz there are so many variables, math is a priori because it's an essential part of the web of belief, ethics barely seems like knowledge because it's so hard to get secure on it, etc.).

Ethics and Observation, Harman



(I'll explain the cat picture soon enough.)

I'm having great difficulty trying to pin down the difference between the role of observation in ethics and science that Harman describes (in "The Nature of Morality"). Not sure why, but I'm just unable to state the difference between ethics and science with regards to observation in any clear way. Anyway, here's my attempt to formulate it. Hopefully this will help me get closer to understanding it.

Harman's thesis: Observational evidence plays a role in science that it doesn't play in ethics. Specifically, observations can provide evidence for scientific theories, but observations can't provide evidence for ethical theories. Ethics fails to meet the standards of science, then.

So how does observation work in science? Harman begins by telling you how observation doesn't work in science. You might think that science works like this: you, the scientist, observe a new species of animal: looks kinda like a cat, kinda like a horse. It might be tempting to think that you're getting an unsullied picture of the world when you make this observation--like you're just downloading a bunch of data into your brain. But that wouldn't be quite true. Philosophers and psychologists know that this isn't the way that people perceive the world; a lot of your preexisting beliefs go into your perceptions. Put another way, how you think about the world has a lot to do with how you see the world. For example, you need to know what a "box" is before you could possibly perceive a box. For another example, suppose you experienced something totally unlike anything you had experienced before. Would you be able to describe it? So perception is more like receiving processed data than receiving raw data. All of our data gets processed in the course of observation.

So does that mean that we should be skeptical of our observations? Why should we think that there is anything behind our observations, if our mind and preexisting beliefs color the way that we look at the world?

The answer, for Harman, is that we have good reason to believe that our observations are true because of inference to the best explanation. What is the best explanation of your observation? And by that I mean, what's the best explanation of the fact that you had the observation that you had? Well, let's list some of the possible explanations of the fact that you had the observation that you had.

(a) You were hallucinating, causing you to have the observation of something that seemed real.
(b) Your theory, your preexisting beliefs, colored the way that you observed the world. What was really there wasn't the kind of animal you took yourself to observe, but you interpreted it that way because of your theory and beliefs.
(c) You actually saw something in the world that looked the way you thought it did.

The best explanation is the third one. So in order to explain the fact that you observed something, we need to infer that you actually did observe something. It's inference to the best explanation.
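Put schematically (this is my gloss on the reasoning just described, not Harman's wording):

Datum: I had observation O (it seemed to me that I saw this new animal).
Candidate explanations of the datum: (a) I was hallucinating; (b) my background theory distorted something else into looking like this animal; (c) there really was such an animal and I saw it.
(c) explains the datum better than (a) or (b).
Therefore, by inference to the best explanation, (c): there really was such an animal.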

Now, clearly this is right, but I'm not sure if it all adds up the way I'm describing it. What makes (c) the best explanation? Is it the simplest? What does simple mean? Does simple mean only one sentence long? Is it simpler to assume that you were hallucinating or simpler to assume that you actually saw a new species? By simpler, do we just mean "more likely to happen to a person"? So we assume that people see real stuff all the time, and from that we reason that the best explanation of a phenomenon is that you actually saw something? But that's gonna end up being a bit circular, because what we're interested in knowing is what justifies the thought that we're not hallucinating during observation.

Let me move on to ethics. Ethics, Harman says, is quite unlike science when it comes to observation. So, having told us how science works, we should be able to see that ethics doesn't work that way. Let's give it a shot.

So, Harman discusses the example of an ethical observation. You're walking down the street, and you see a bunch of kids burning a cat (his example, not mine!). You immediately come to the conclusion "It's wrong to burn a cat." Now, you didn't necessarily believe this before you saw it. It might be that life never afforded you the opportunity to consider the case of a cat lynching. So we can call this a full-fledged observation of an ethical fact. Of course, of course, your pre-existing beliefs about what's right and wrong factor into your observation, but that doesn't matter, because (as Harman argued above) the case is the same for any observation, including a scientific one. Every observation is processed through the brain's machinery before coming to your consciousness, whether it's an ethical observation or a scientific one.

Now, in science we said that we have reason to believe that scientific facts are true because they are necessary for providing the best explanation of the fact that you had the observation that you did. Now, let's try this for ethics. What is the best explanation of the fact that you had the ethical observation that you did, that you observed that it's wrong to burn cats? Here are a couple of options:
(a) You actually did perceive something in the world that appears the way that you observed it; that is, you actually managed to perceive/see ethical properties in the world, the same way people observe that a ball is blue or that a tree is tall.
(b) You were "hallucinating." Your experience was that of an observation about something real in the world, but actually it was your brain and beliefs doing all the work.

Around here is where I get stuck. The idea is supposed to be that the best explanation of the fact that you had an ethical observation is (b). And so inference to the best explanation doesn't require any ethical facts to exist. But isn't this just to assume what we were trying to prove? Oy. Need to get back to this.