I have a question about Cartesian skepticism. One of the premises of the argument is something to the effect of: (1) I don't know that I'm not dreaming. My question is: What justifies this proposition? My intuition is that the evidence for (1) cannot possibly be empirical; for the upshot of the skeptical argument is precisely that all empirical claims are dubious. (Maybe it's helpful to rephrase (1) as "It's possible that I'm dreaming," if that is legitimate.)

You write "One of the premises of [the skeptic's] argument is something to the effect of:I don't know that I'm not dreaming." And yes, as you imply, it would be rather odd for a skeptic to start by being too dogmatic about what he can or can't know! But perhaps he doesn't need to be. Perhaps it is better to think of the argumentative situation like this. You are cheerfully going about your business, thinking that you know perfectly well that you are seeing a computer screen right now (and the like). Your friendly neighbourhood skeptic then issues a challenge: how do you know that? You appeal to the evidence of your senses. Your friendly skeptic chides you: how do you know they don't lead you astray all the time? Perhaps it is all just a vivid dream. Or perhaps an evil demon is making it seem as if you are seeing a computer screen when you aren't. In modern dress: perhaps you are a brain in a vat, being stimulated by a mad scientist so you still think you are embodied and seeing a computer....

"In expanding the field of knowledge we but increase the horizon of ignorance” (Henry Miller). Is this true?

No. OK, the following more prosaic thought is true: increasing our knowledge can reveal new areas of ignorance. Before you discover Australia, you don't know that there is a wild continent still to be mapped. Before you discover that there are protons, you don't know that there's a question of what happens when we smash them together in a particle collider. And so it goes. But the fact that increasing our knowledge can reveal new areas of ignorance obviously does not imply that our new knowledge (as far as it goes) isn't knowledge after all. You can come to know Australia is there even though you've not mapped it all yet. So it would be fatuous to aver (as Miller seems to do) that expanding the field of knowledge is nothing but increasing our ignorance.

Scientists often say (rather diplomatically, I think) that science cannot rule on the question of whether God exists. But is this really true? I suppose that some people might hold God's existence to be evident a priori; but I don't think that most religious people actually think this way.

Discussions of the status of theological claims can suffer from a restricted diet of examples. It is worth remembering that lots of theological claims are in fact uncontroversially true or uncontroversially false, and their epistemic status (and their relation to science) is pretty clear. Take, for example, the claim that Zeus exists. I take it that no one now reading this site believes that that theological claim is literally true! But why? Not, I'm sure, on the basis of fancy philosophical arguments. Yet rejecting the existence of Zeus surely isn't irrational prejudice either. For the existence claim is bound up with a range of stories about how the world works; and we now know the world just doesn't work that way. Mount Olympus is not populated with gods; bolts of lightning are naturally caused discharges of electricity; clouds and rain are not gathered by supernatural agency; burnt sacrifices to Zeus do not increase the chances of better crops or victory in battle; and so it goes....

Courage is considered a virtue and is defined in the dictionary as "lack of fear". How can "lack of fear" be a virtue?

The dictionary, then, is a bad one. Courage is not a matter of lack of fear. It is a matter of not letting even justified and appropriate fear stand in the way of doing the right thing -- such as rescuing your injured friend from a burning building, standing up for the innocent man in the face of the baying mob, refusing to betray the whereabouts of the resistance fighter. Not to feel fear of the fire (or the mob, or the Gestapo, or whatever) would be a sign of a kind of reckless madness, not of virtue: the virtue of courage comes in knowing when it is appropriate to let the fear guide your actions and when you have to master it -- and being able to do so.

Where on earth did philosophers get the idea that "just in case" means "if and only if"[1] instead of "in the event of"? I ask just in case there's a legitimate reason for the apparently willful muddying of language! [1] For example, http://www.askphilosophers.org/question/2290

I recall that, when I first started editing Analysis twenty years ago, someone sent me a short paper complaining about the linguistic tic of using "just in case" to mean "if and only if". So, rightly or wrongly, this has been going on for a while! But note that we can't grammatically substitute "in the event of" for "just in case" in e.g. "I'll buy some tofu just in case some guests are vegan". Nor does that sentence mean the same as "I'll buy some tofu just in the event that some guests are vegan". The first, on my lips, means that I'll buy the stuff anyway, so that I'm prepared; the second means I won't buy the stuff unless I really have to.
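To make the philosophers' usage explicit in symbols (this is a gloss I am supplying, not something stated in the answer above): when a philosopher writes "P just in case Q", this is standardly read as the biconditional

    P \leftrightarrow Q, \quad \text{i.e.} \quad (P \to Q) \land (Q \to P),

so that P holds exactly when Q does. The everyday, precautionary "just in case" of the tofu example carries no such biconditional force.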

If I had a device that could manipulate people's wants (like make them want to give me free money for no reason) would that take away their free will?

A footnote to Eddy Nahmias's very helpful answer. What should we learn from all the complexities of the debates which he touches on? We could say: The ins and outs of the debates just go to show that our concept of "free will" is a very complicated and sophisticated one, although one of which it is difficult to command a clear view. We need to do more careful analytic work to explore how this pivotally important concept works. But another line is: We can now begin to see that talk of "free will" muddles together quite a variety of different things we might care about (such as the capacity to act on our desires, the capacity for self-control, having desires we reflectively identify with, absence of interference by others, etc.). There isn't a unitary concept here, and undifferentiated talk of "free will" isn't very helpful. Some of us incline to the second line, taking our cue from e.g. Daniel Dennett's provoking and characteristically very readable 1984 book, Elbow Room: The Varieties of...

Since one's own reasoning is basically a set of rules of inference operating on a set of axiomatic beliefs, can one reliably prove one's own reasoning to be logically consistent from within one's own reasoning? Might not such reasoning itself be inconsistent? If our own reasoning were inconsistent, might not the logical consistency (validity) of such "proofs" as those of Gödel's Incompleteness Theorems be merely a mirage? How could we ever hope to prove otherwise? How could we ever trust our own "perception" of "implication" or even of "self-contradiction"?

This question raises a number of issues it is worth disentangling. It is far from clear that we should think of our reasoning as "operating on a set of axiomatic beliefs". That makes it sound as if there's a foundational level of beliefs (which are kept fixed as "axioms"), and then our other beliefs are all inferentially derived from those axioms. There are all sorts of problems with this kind of crude foundationalist epistemology, and it is pretty much agreed on all sides that -- to say the least -- we can't just take it for granted. Maybe, to use an image of Quine's, we should instead think of our web of belief not like a pyramid built up from foundations, but like an inter-connected net, with beliefs linked to each other and our more "observational" beliefs towards the edges of the net linked to perceptual experiences, but with no beliefs held immune from revision as "axioms". Sometimes we revise our more theoretical beliefs in the light of newly acquired, more observationally-based, beliefs...
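For reference (and as a gloss I am adding, not a claim made in the answer above), the formal result the question alludes to is Gödel's second incompleteness theorem, which can be stated roughly as follows:

    \text{If } T \text{ is a consistent, recursively axiomatized theory containing enough arithmetic, then } T \nvdash \mathrm{Con}(T),

that is, such a theory cannot prove the formal statement of its own consistency. Whether this formal result tells us anything about the trustworthiness of our informal reasoning in general is, as the answer suggests, a further epistemological question.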
