Hi all. I don't know if anyone can answer this, but is it really possible to upload our consciousness onto a computer hard drive and achieve Transcendence? I believe there was a movie with Johnny Depp in the lead that looked at this, but as I haven't seen it, I really don't know what treatment the topic got. Anyway, I hope someone will take pity on me and answer my question, because when it comes to Transcendence, it's really the elephant in the lounge room. Cheers, Pasquale.

It's too bad that movie was kinda lame, but the idea is not. If one is a functionalist about mental states, including consciousness, then one believes that our mental states can be instantiated in any system whose states play the same functional roles as the brain states that instantiate (or are) our mental states. Functional roles are basically what the states do. What a clock does is keep time. A clock can be implemented by a digital device, a bunch of gears, or even sand or water set up in the right system (but would they be the same clock?). Functionalists think our desires, beliefs, sensations, emotions, pains, memories, etc. can be understood in terms of what they do--that is, the way they take input information, organize it, and interact with each other to cause output mental states and behavior. Computers helped motivate this theory of mind, and if certain versions of it are right, then our mental states could be implemented in complex enough computer systems, presumably connected...
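The multiple-realizability point in the clock example can be sketched in programmers' terms: the functional role is an interface, and anything that plays the role counts as a clock, whatever it is made of. This is only an illustrative analogy, and the class names below are my own invention, not anything from the literature:

```python
from abc import ABC, abstractmethod


class Clock(ABC):
    """The functional role itself: anything that keeps time is a clock."""

    @abstractmethod
    def tick(self) -> None:
        """Advance by one unit of time."""

    @abstractmethod
    def elapsed(self) -> int:
        """Report how many units have passed."""


class DigitalClock(Clock):
    """Realizes the role with a bare counter."""

    def __init__(self) -> None:
        self._ticks = 0

    def tick(self) -> None:
        self._ticks += 1

    def elapsed(self) -> int:
        return self._ticks


class GearClock(Clock):
    """Realizes the same role with a 'gear' that turns 6 degrees per tick."""

    def __init__(self) -> None:
        self._degrees = 0

    def tick(self) -> None:
        self._degrees += 6

    def elapsed(self) -> int:
        return self._degrees // 6


# Radically different internals, identical functional role:
for clock in (DigitalClock(), GearClock()):
    for _ in range(3):
        clock.tick()
    print(type(clock).__name__, clock.elapsed())  # both report 3
```

The functionalist's claim is that mental states are like the `Clock` interface: what makes something a belief or a pain is the role it plays, not whether the thing playing it is made of neurons, gears, or silicon.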

How can we be sure that dreaming is a real phenomenon? It seems like there is no scientifically objective way to know that a person is dreaming; the most we can do is ask them. We are relying on our own subjective experiences, which we cannot verify, and on the words of others, which we cannot verify either. REM sleep is correlated with reports of dreaming, but our measurements of mental activity aren't granular enough to determine whether a person is in fact *experiencing* an absurd fantasy world rather than simple darkness. Is it possible to approach dreams and dreaming scientifically, if we have no way to examine or verify them or their existence in any way beyond subjective claims?

These are great questions. There have been philosophical arguments that suggest it is impossible to know whether dreams occur while we sleep or are just confabulations we create as, or just after, we awake (call this 'dream skepticism'). These arguments fail once we consider all the evidence and use abductive (best explanation) reasoning. When we wake people during REM, they are likely to report dreams. When we wake them during other phases of sleep, they are unlikely to report (or remember) dreams. When we record neural activity using EEG and now fMRI, we see activity that correlates with the sorts of experiences reported by the dreamer and that resembles the activity seen during similar waking experiences (fMRI cannot get at all of what you call the "granularity" of experiences, but see the link below for an initial attempt). Etc. This body of data could be explained away by a dream skeptic, but that explanation would likely look ad hoc and fail to make predictions as good as the...

Is it a common view among philosophers that human beings are simply biological computers? Doesn't this view reduce philosophy of mind to solely neuroscience?

It is a common view among philosophers that human beings are biological entities--that, in some sense, our minds (including our conscious mental processes) are our brains (are based on neural processes). There are few substance dualists (who think the mind is a non-physical entity). But in which sense the mind is the brain remains a topic of great controversy (some fancy terms for the relationship between the mental and the physical include identity, supervenience, and functionalism). It should not be controversial that information from neuroscience will inform debates in philosophy of mind. But it is unlikely that neuroscience alone will answer all questions about the nature of mind. Notice that the very way you phrased your question suggests complications. If we did think the brain were a biological computer (this view is one form of functionalism), then many of the details of neuroscience might turn out to be irrelevant. The interesting facts about computers are about their programs ...

Who are some modern philosophers that argue for either dualism or the idea that mind is a nonphysical substance?

By "modern philosophers" I am assuming you mean contemporary philosophers. (We philosophers use "modern philosophers" to refer primarily to European philosophers from roughly 1600-1900, and among that group there are a number of substance dualists, including Descartes, Malebranche, Leibniz, and arguably Kant.) Among contemporary Western philosophers, there are not that many substance dualists, though substance dualism has recently been making a bit of a comeback. Of note are E.J. Lowe, Richard Swinburne, and (I think) Alvin Plantinga. I am likely leaving out others. There is an even bigger resurgence of "property dualists", people who argue that the universe consists of just one kind of substance, but all (or some) of that substance has both physical properties and mental properties. David Chalmers played a big role in motivating this position. Recently, Susan Schneider (if I understand her correctly) has argued that you can't be a property dualist without accepting substance dualism. The dominant position in...

Why are people so skeptical about the notion that a sufficiently advanced computer program could replicate human intelligence (meaning free will insofar as humans have it; motivation and creativity; comparable problem-solving and communicative capacities; etc.)? If humans are intelligent in the way we are because of the way our brains are built, then a computer could be constructed that replicates the structure of our brains (incorporating fuzzy logic, neural networks, chemical analogs, etc.). Worst comes to absolute worst, a sufficiently powerful molecular simulator could run a full simulation of a human brain or human body, down to each individual atom. So there doesn't seem to be anything inherent in the physicality of humans that makes it impossible to build machines with our intelligence, since we can replicate physical structures in machines easily enough. If, however, humans are intelligent for reasons that do not have anything to do with the physical structure of our brains or bodies - if there...

You have some philosophy questions in here and some psychology questions. The philosophical questions are about (1) whether a machine could ever replicate all human behavior (i.e., pass a "complete Turing Test"), and (2) whether such complete replication of behavior would entail that the machine actually had the mental states that accompany such behavior in humans (i.e., whether a machine's (or an alien's!) passing such a complete Turing Test means that it is conscious, self-aware, intelligent, free, etc.). There's a ton to be said here, but my own view is that the answers you suggest are the right ones--namely, that there is no in-principle reason that a machine (such as an incredibly complex computer in an incredibly complex robot) could not replicate all human behavior, and that if it did, we would have just as good reason to believe that the machine had a mind (is conscious, intelligent, etc.) as we do to believe other humans have minds. I think there may be severe practical limitations to...

Are dreams experiences that occur during sleep? Or are they made-up memories that only occur upon waking? How could one tell either way?

Good question, one that has been debated by philosophers (perhaps even psychologists?), and one that is answered nicely in Owen Flanagan's Dreaming Souls . You can get a glimpse of the problem on p. 19 found here but he gives the full answer later in the book (e.g., pp. 174-5). Basically, this question offers a nice case where we have to go beyond the evidence offered by our first-person experiences. We can't be sure, upon waking up, whether we had a dream a while ago during sleep or whether our minds are very quickly making up false memories that we experience as dreams. (We also can't be sure from our experiences how long our dreams last--Kant and others have thought they occur 'in a flash'. And we can't be sure whether our reports of our dreams accurately convey what we actually dreamed, assuming the dream experiences occurred during sleep.) If one assumes that our experiences are the only evidence relevant to answering such questions, then one may not be able to answer them. But...

Can dogs lie? Our dog will 'pretend' to bark at something outside the house when it is near time for her meal or she has not been for a walk. As she has other behaviours to get our attention - patting with her paw, staring mournfully, or standing over us on our lounge (she is a big dog) - it seems she 'chooses' to 'lie' at times to get our attention.

Good question, and I think it has a lot of philosophical import. Here's why. What we might call a "true lie" is one where the liar knows what she is doing. She knows that she needs to do or say something to alter what her target believes in order to get him to do something the liar wants. Contrast this with a "behavioristic lie," one that has the effect of getting the target to behave a certain way but without the "liar" knowing how she is doing it. Take the case of a 3-year-old girl who has learned that saying "I'm tired" often gets her out of doing something she doesn't want to do. One night her dad says "It's time to go to bed," so she repeats her standard ploy, "I'm tired." She does not seem to know how her lie works! This difference between "true lying" and "behavioristic lying" seems to make a big difference. Behavioristic lying might not require any especially impressive cognitive abilities. Well, behavioristic learning itself is pretty impressive--and it allows more interesting and...

Can the mind "feel" things even though nothing has happened? If so how does this work? For example, someone swung a textbook at my head playfully, and even though he did not hit me, I still felt something where he would have hit.

The brain and nervous system "combine" information from different sensory modalities, so it is quite likely that when you visually perceive that you are about to be hit, other parts of your brain respond, including perhaps sensory systems that normally perceive pain in that part of the head and/or motor systems that prepare you to react to such a blow. There is a lot of interesting research showing that the same parts of the brain are active when you imagine performing an action (but don't perform it) as are active when you actually perform it--sometimes you can start to feel your body doing something even though you don't move. Your situation might be sort of the reverse of this. The key is to remember that even though "nothing has happened" on the outside, lots can be happening on the inside--that is, in the brain, which, of course, is the basis of our minds' feeling things.

How do thoughts interact with the physical universe? Our movements and actions seem to be simple responses to signals from our brain, but what triggers those neurons? I mean, we *choose* to act. We think, “Do I want to do this? Yes,” then do it. How is that possible? If it's possible for immaterial things like thoughts, with no apparent location in the physical universe, to interact with our neurons, then why isn't it possible for imaginary concepts to interact with other physical catalysts?

You are raising really interesting questions that philosophers debate under the headings of "mental causation," "theory of action," and "free will." One way the problem gets generated is by assuming, as you do, that thoughts (including decisions or intentions) are immaterial things. That's what Descartes said, and ever since, the main objection to his view has been your question: how could an immaterial thing causally interact with a physical thing like the brain (and vice versa, since on his view the physical world sends information through the brain to the mind, which consciously experiences it--how the heck could that happen?). The main response to this problem is to give up the assumption of "dualism" and instead try to understand how thoughts and conscious experiences can be part of the physical world. That's no easy task. But one way to make the initial move in that direction is to see that the idea of non-physical or immaterial thoughts makes no more sense than the idea of physical...