September 9, 2011 (2 responses)
Miriam Solomon and Gabriel Segal
December 20, 2011 (2 responses)
Miriam Solomon and Gabriel Segal
January 3, 2012 (3 responses)
Andrew Pessin, Richard Heck and Gabriel Segal
This is a great question, and one with a very long history. There's a key ambiguity in it, though, that should be clarified at the start: 'what would it have to be for us to consider it sentient?' might be read metaphysically or epistemologically. To read it metaphysically is to ask what, in fact, is sufficient for the robot to be sentient; to read it epistemologically is to ask what evidence would be sufficient for us, or any third party, to judge that the robot is sentient. The difference is important because it might be that there is some essential feature to sentience, but one which would never allow us to judge with any confidence/reliability that some creature other than ourselves possesses it ....
That said, a good starting point for you would be Descartes's Discourse on Method, where he argues (in brief) that the possession of genuine linguistic competence and general rationality are marks of the 'mental', or of 'sentience' broadly construed; he holds that no purely mechanical/physical account could ever explain why a creature demonstrates those properties, and while his account is dated, there's no question that 'language' and 'reason' remain very challenging things, even today, for researchers in Artificial Intelligence to instantiate in 'robots.' Then, after Descartes, skip a few centuries and read John Searle's famous and controversial paper, "Minds, Brains, and Programs," originally in the journal Behavioral and Brain Sciences in 1980 -- which set off a decades-long debate over whether any computer or computer program could ever actually instantiate mental states (as opposed to merely mimicking them). If you read that paper and then google 'responses to Searle's Minds Brains Programs' (or more generally 'responses to Searle's Chinese Room Thought Experiment'), you will get plenty to chew over as you contemplate your excellent question!
hope that's a useful start --
The other classic paper on this issue is Alan Turing's "Computing Machinery and Intelligence", from 1950, which articulates what has come to be known as the "Turing Test". Turing's idea was to set up an experiment. A modern version might use some kind of internet chat program. You are talking with two other "people". One really is a person. The other is a computer. You can talk to them for as long as you like, about whatever you like. Then if you can't tell the difference, Turing says, the computer is intelligent. Obviously, this is, at first blush, what Andrew calls an "epistemological" approach to the problem, but Turing doesn't see it just that way.
Let me mention, by the way, that 2012 is also the "Alan Turing Year", celebrating the 100th anniversary of his birth. Turing had a very interesting, and tragic, life. Not only was he one of the founders of modern computer science, he put his genius to work for the British military during World War II and helped crack the German codes. The tragic part lies in Turing's being prosecuted for homosexuality in 1952 and then being forced to take female hormones as "treatment" instead of being sent to prison. He committed suicide in 1954, at the age of 41.
January 18, 2012 (2 responses)
Stephen Maitzen and Gabriel Segal
"I recognize that there is some not inconsiderable paradox in doubting the very idea of being able to form a thought and using thought to achieve that doubt." Well spotted! Suppose that your doubts about memory lead you to this: "I cannot trust any thought, including this one". Where do you go from there? It doesn't look as though the paradoxical nature of the thought undermines it in such a way that you can conclude that it is false and proceed to trust some thoughts. It sort of leaves you with nowhere to go.
I agree with Stephen. Memory is not that unreliable, although it is less reliable than we tend to think. When we seem to remember things, our brains seem to do a lot of construction and interpretation, presenting to us a partly made-up image of some past event as if it were a perfectly accurate representation. This can get us into trouble. But our short-term memory is pretty good and serves its purpose. It is not hard to keep track of the thoughts involved in a short line of reasoning. It also gets a lot easier if we write them down. We can then create longer lines of reasoning by understanding shorter ones and stringing their conclusions together, keeping track of the overall structure. Memory, combined with pen and paper (or today's equivalents), is good enough to support reason as a discursive process.
January 3, 2012 (1 response)
November 17, 2011 (1 response)
October 11, 2011 (1 response)
After reading this question, I first tried to transmit my answer to you telepathically, but it wasn't working, so I thought I'd try this more traditional method.
In any case, it strikes me that in order for something to count as telepathy, one would have to have some sort of direct and unmediated access to the thoughts of another. Suppose Jane and Clone Jane both go see a movie and, because they have identical genetic makeups and (let's suppose) similar past experiences, they each think something like "The ending would have been more emotionally satisfying had the hero not gotten killed." Each of them is having the thought for the same reason, but they are each having it independently. That is, Jane's thought has no causal connection to Clone Jane's thought, and Jane does not have any direct access to Clone Jane's thought. And vice versa. Their thoughts have similar (parallel) explanations, but no direct linkage to one another. So I don't see why this should count as a form of telepathy.
One way to put the point: Though Jane and Clone Jane "share a thought" in the sense that they each have an independent mental state with identical content, they do not "share a thought" in the sense that they each have access to the very same mind/brain state.
September 15, 2011 (2 responses)
Andrew Pessin
Hm, good question. But does your question have an implicit premise -- that we do, or think we should, 'force' such people to change? When your conditions are truly met -- they're happy, not dangerous, and, presumably, adequately self-sufficient -- I'm not sure many people DO think we should treat them 'just for the sake of sanity' .... There's a nice novel called "Unless," by Carol Shields, that partly explores these themes -- a young woman suddenly decides to adopt a very alternative lifestyle, and her very conventional mother can't help but think there must be something 'wrong' or 'mentally unstable' about her. It raises the question of when 'difference' becomes 'illness' -- which I think is just underneath the surface of your question ....
hope that helps--
August 6, 2011 (2 responses)
Andrew Pessin and Jasper Reid
My first question wouldn't be how -- since it does seem to me that such people can clearly be 'conscious' in most senses of that word, and dreams often recreate (perhaps altered versions of) conscious experience -- but rather of what? But then again, presumably the answer is of material evident through their other functioning senses ..... Why wouldn't you accept that as an answer? Or are you imagining that a blind and deaf person lacks all conscious experience altogether?
August 4, 2011 (2 responses)
Gordon Marino and Jonathan Westphal