Can a person who was born blind know what "red" looks like? Is there any way you can explain it to him/her so that he/she can perceive it the way we do?

There are two different, but related, issues here, on neither of which is there universal agreement among philosophers (but, then again, is there ever?). First, there's "Molyneux's problem": Can a person born blind who later gains sight distinguish a cube from a sphere merely by sight (assuming the person could distinguish between them by touch)? There's some empirical evidence that the answer is "no". The psychologist Richard Gregory has investigated this. But closer to your specific question is the philosopher Frank Jackson's thought experiment about "Mary", a color scientist who lives in a completely black-and-white world but who is the world's foremost expert on color perception. She has never experienced red. Would she learn anything if she experienced it for the first time? I.e., is there anything "phenomenal" to the experience of red over and above what physics can tell us? Jackson originally argued that there was, i.e., that Mary would learn something from the experience of red,...

I remember reading that Descartes considered animals as nothing more than automata incapable of experiencing pain because they do not possess "souls" (define that!). Viewing this favorably you could say he was an intellectual living in a rarefied world of his own, simply a product of his age. Less intelligent people of his time, however, liked, say, dogs and understood that if you kicked them and they howled and ran away then they were experiencing pain. Was there something the matter with Descartes and his view of animals if he couldn't make this simple connection, so clearly cognate with the human experience of pain? I know Hume had problems with causation but surely not in such a painfully obvious empirical manner!

Actually, it was because Descartes thought that animals lacked language and reason that he believed they were mere automata. (I say "mere", because we need to leave open the option, supported these days by, e.g., Daniel Dennett, that we are automata!) As for "experiencing pain", we need to distinguish between actually feeling pain and (merely) exhibiting pain behavior. An automaton can do the latter; whether or not it can also experience pain is a separate question. Consider a computer outfitted with a pressure-sensitive device connected to its operating system and an operating system that can be in one of 3 states. It begins in the "super-user-friendly" state and greets me with "Hi Bill; what can I do for you today?". I ask it to open my word processor so I can edit my philosophy essay on Descartes. It says, "Sure thing! Here ya go!". I edit for a while, and then hit its pressure-sensitive device very hard. This causes it to go into its "normal" state: When I exit my word...
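The computer in this thought experiment can be sketched as a toy state machine. This is only an illustration of the point that an automaton can exhibit "pain behavior" without our having settled whether it feels anything; the excerpt names only the first two states ("super-user-friendly" and "normal"), so the third state's name and all of the replies after the first are hypothetical.

```python
# Toy sketch of the thought experiment: a machine whose responses change
# when its pressure-sensitive device is hit hard, exhibiting "pain
# behavior" with no claim that it experiences pain. The third state name
# ("grumpy") and the later greetings are hypothetical; the text gives
# only the first two states and the first greeting.

class PainBehaviorMachine:
    STATES = ["super-user-friendly", "normal", "grumpy"]

    GREETINGS = {
        "super-user-friendly": "Hi Bill; what can I do for you today?",
        "normal": "What do you want?",          # hypothetical reply
        "grumpy": "Leave me alone.",            # hypothetical reply
    }

    def __init__(self):
        # The machine begins in its friendliest state.
        self.state = "super-user-friendly"

    def hit(self):
        """A hard press on the pressure-sensitive device demotes the
        machine to the next, less friendly state (if there is one)."""
        i = self.STATES.index(self.state)
        if i < len(self.STATES) - 1:
            self.state = self.STATES[i + 1]

    def greet(self):
        return self.GREETINGS[self.state]


m = PainBehaviorMachine()
print(m.greet())   # friendly greeting
m.hit()            # "hurting" the machine changes only its behavior
print(m.greet())
```

The design choice mirrors the philosophical point: the machine's "pain" is exhausted by its state transitions and outputs, which is exactly why exhibiting pain behavior and feeling pain come apart as questions.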

What grounds the truth of logical inferences such as modus ponens or hypothetical syllogism? Are these logical truths grounded in "intuition" similar to Foundationalism?

I hate to sound like, well, like a philosopher, but I think we need to get some terms straight before we begin: Logical inferences such as modus ponens (more properly, rules of inference) are neither true nor false. Truth and falsity are properties of things like sentences, statements, or propositions (depending on your ontology). The analogue of truth values for rules of inference is "validity" and "invalidity". Very roughly, a rule of inference is valid if and only if it is truth-preserving. That is, it is valid if it will only allow you to infer truths from truths--if you input true propositions and apply a valid rule of inference to them, you will output only true propositions. (If you input a false proposition, anything can happen; "garbage in/garbage out", as they say.) So, a slightly more accurate way to phrase your question is this: "What grounds the validity of a rule of inference?" And now, I think, the answer is clear: Just as something like "correspondence to reality...

I would appreciate some recommendations on texts (for a layperson -- a nonprofessional philosopher) whose subject is the philosophy of science.

And I'll chime in with my favorite: Okasha, Samir (2002), Philosophy of Science: A Very Short Introduction (Oxford University Press), which I think is a terrific survey and lives up to its title of being "very short". I'd also agree that it's probably best to look at a survey such as the Stanford Encyclopedia of Philosophy article, or something like Okasha's book, before diving into one of the classics.

If a person were to be created as a virtual reality person (such as a character in a Sims game that "reacts" and "grows"), and this person was "downloaded" into an actual body, is that person considered "real"? Were they real before the download, or is a physical body part of the conception of "real"? Would you even be considered a legitimate person, since all of your "memories" could be considered "fake"?

Downloading such an avatar, assuming it were possible, would probably not result in a "real" person because such an avatar would doubtless be less "complete" than a real person. There are two other discussions besides Velleman's that you might find interesting: Pollock, John L. (2008), "What Am I? Virtual machines and the mind/body problem", Philosophy and Phenomenological Research 76(2):237-309, online at http://oscarhome.soc-sci.arizona.edu/ftp/PAPERS/Virtual-machines.pdf and a terrific science-fiction novel by a philosopher: Leiber, Justin (1980), Beyond Rejection (Del Rey Books); out of print, but available on amazon.com

Can one learn to be rational? How would this be done?

One can certainly learn what rationality is: You can take courses in logic, in probability and statistical reasoning, etc. And you can study the limits on rationality: Much work has been done by cognitive scientists on "bounded rationality" (the work of Nobel-prize winner Herbert Simon on methods for making rational decisions in the presence of incomplete, inconsistent, or "noisy" information and within strict time limits), on errors in probability judgments (e.g., the work of another Nobel-prize winner, Daniel Kahneman, and his late colleague Amos Tversky), and on reasoning errors based on incorrect "mental models" (P.N. Johnson-Laird). Whether any of this can teach you to be rational will largely depend on how much of it you take to heart and practice in your daily life.
