I read in Clarke's "2001: A Space Odyssey" about the Turing test. Is this a good test for whether a thing is conscious?

According to the Turing Test, if you have an extended email conversation probing to see whether your interlocutor is a person or a machine and you eventually decide it is a person, but it turns out to be a computer, then we ought to say that the computer is intelligent. Taken as a test for consciousness (probably not Turing's own intention), there are two important considerations in favour. One is that the test provides a neat way of avoiding any prejudice against computers on the grounds that they don't look human. The other is the thought that at the end of the day our best evidence that other people are conscious may be their intelligent linguistic behavior. There are also two important considerations against the test. One is that there seems no reason to say that the computer couldn't fool us without being conscious. The other is the thought that my confidence that other humans are conscious might depend on my knowledge that I myself am conscious and that you and I have a similar...

I heard about the analogy of a computer and the mind, but I'm fuzzy about the connection. Please help!

One attraction of this analogy stems from the distinction between hardware and software (program) for computers. Computers are physical things, but the same program may run on physically different computers, so the states of the program are not to be identified with particular physical states. Instead, it seems that program states are to be understood 'functionally', in terms of their causes and effects, which may in turn be other program states. What makes the analogy attractive is the thought that mental states might also be functional states. Thus the same kind of thought might be 'run' on or 'realized' in different physical states on different occasions, just as the same program might be run on different types of computer hardware. One attraction of this idea is that it seems to capture the intuition that mental states are not simply identifiable with lumps of matter, while avoiding any suggestion that they are spooky non-physical stuff.

Is it just a philosopher's presumption to think the referent of the 'because' in a statement like, "He did that because he wanted to" is a causal connection?

'Because' is often used as the connective of explanation, and a great many of the explanations we give are causal. But not all: explanations in pure mathematics and at least most philosophical explanations are not causal, but are still given with a 'because'. So the appearance of 'because' in your example does not in itself show that desires are causes of actions (though I think they are). By the way, here is a non-causal explanation I particularly like. Suppose a bunch of sticks are thrown in the air, so they spin and tumble as they fall. Now take a snapshot of the sticks before any of them hit the ground. More of the sticks are near the horizontal than near the vertical. Why? Because there are more ways for a stick to be near the horizontal than near the vertical. This is a geometrical fact, not a physical one, so it is not a cause, but it provides a lovely explanation. (To see that it is a fact, think of a single stick with a fixed midpoint. How many ways can it be vertical? (Two.)...
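The geometrical claim behind the sticks example can be checked numerically. Below is a minimal Monte Carlo sketch (my own illustration, not part of the original answer): sample stick orientations uniformly over the sphere of directions and count how many fall within 10 degrees of horizontal versus within 10 degrees of vertical. The 10-degree band is an arbitrary choice for "near".

```python
import math
import random

random.seed(0)
N = 100_000
band = math.radians(10)  # "near" means within 10 degrees

near_horizontal = near_vertical = 0
for _ in range(N):
    # For a direction uniform on the sphere, the z-coordinate is uniform in [-1, 1].
    z = random.uniform(-1.0, 1.0)
    tilt = math.acos(abs(z))  # angle from the vertical axis, in [0, pi/2]
    if tilt < band:
        near_vertical += 1
    if tilt > math.pi / 2 - band:
        near_horizontal += 1

# Theoretical fractions: near horizontal = sin(10 deg) ~ 0.174,
# near vertical = 1 - cos(10 deg) ~ 0.015 -- roughly an 11-to-1 ratio.
print(near_horizontal / N)
print(near_vertical / N)
```

The simulation just restates the solid-angle fact: the band of directions around the horizontal plane occupies far more of the sphere than the small caps around the vertical axis, which is why a snapshot of tumbling sticks shows mostly near-horizontal ones.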

A discussion with a philosopher friend got me all bewildered. He claimed that we cannot say that animals feel pain, because a mind is necessary to feel pain, and animals don't have a mind. My argument was twofold: 1. How do we know that animals don't have minds? 2. Pain is a result of stimulus to certain parts of the brain. If we assume that animals don't have minds, we can still see that their brains respond to pain stimuli the same way as ours. Even if they are unable to cognitively translate an external factor into a thought train like "I stuck my hand on a hot plate, it hurt, so I removed my hand from the hot plate", surely we can watch them pull back from things that we would experience as painful. I was wondering what your thoughts are on this subject. Thanks.

I know of no good argument for the conclusion that animals cannot feel pain, and given the behavioral and physiological similarities between us and some animals the evidence seems very strong that some do. A biologist friend of mine told me about an experiment with, yes, rats. These rats had severe arthritis, a condition very painful in humans. They were given a choice between plain water and water laced with a tasteless drug (Tylenol, perhaps) that does nothing to improve the arthritis, but in humans reduces pain. The rats quickly came to prefer the water with the pain-killer. This is no proof that rats feel pain, but it is a telling argument. And remember that you have no proof, in the strong sense of that term, that people other than yourself feel pain either.

Many philosophers seem to believe that belief is involuntary. But if this were the case, wouldn't it be true that every human being, when presented with the right information, would automatically assume a certain belief? So when person A and person B are presented with information Y, they will always come to believe X, just as in other involuntary acts of the human body. If person A and person B are both given a chemical depressant, let's say a tranquilizer, they will always fall asleep. They have no control over it; it is just an involuntary chemical reaction in the body. It does not seem to me that belief works with this same type of involuntary, automatic, mechanistic quality. For example, we could take a sample of 100 Americans and show them all the evidence in support of Darwin's evolutionary process. About half would afterwards support evolution, and half afterwards would say it is phooey. Although I have not seen the results of such a study, I think it is safe to assume that this would be the...

Uniformity does not follow from involuntariness: the tranquilizer example notwithstanding, different people sometimes have strikingly different reactions to the same drug. So different that a drug that cures one person kills another. Getting back to beliefs, I venture that even if two individuals were brought up in exactly the same environment, they would not end up with the same beliefs. But of course no two individuals are brought up in anything like exactly the same environment.

I was perusing the site, and I came up with this weird thought: Can a person think about the thought that they are thinking? Because at first I thought no... but then I thought by posing this question I was thinking about what I was thinking... but I started to doubt my thoughts... so I thought it might be a good idea to get a second opinion.

Yes, because we can think about our own mental states. Thus I may be thinking about espresso machines, and then wonder why I am doing that. The first thought is about those machines; the second thought is about the first thought. Desires may also stack up in this way. Thus I may desire a new espresso machine, but also wish that I didn't have that desire (say because it's immature: I already have a perfectly good espresso machine).

If we built a computer that could analyse our minds, and it figured out how they work and explained it to us, would we be able to understand?

Maybe 'the true and complete theory of mind' would be too difficult for us to understand or to understand fully, but if so this is not because we would be using our minds to try to understand the theory of mind, but just because the theory was too difficult for us. Just as there is no paradox in using your eyes to look at your eyes (with the help of a mirror), there is no paradox in using your mind to understand your mind.