If we were able to create a computer that functions exactly like a human brain, when does this "artificial" intelligence stop being artificial? I suppose what I'm trying to say is that if this computer could truly learn, and be programmed in such a way as to develop emotions just as humans do, when does it become real? When would it no longer be right to simply unplug it and "kill" it?
Many people would, I'm assuming, argue that a computer isn't living, or isn't biological. (As posed in an earlier answer, that's not particularly valid; we all weed our gardens.) It comes down to emotion as far as I'm concerned.
I'm finding this question particularly difficult to phrase, and the more I type the more I suspect it will come across as all over the place, so I'm going to stop there and hope for the best! If there is no response I will try again another time.
The “artificial” in “artificial intelligence” describes the origin of this form of intelligence: it is the result of artifice rather than nature. Both artifacts and natural things are real. However, until the advent of computers, the term “intelligence” was rarely applied to artifacts, so thinking that artifacts can be intelligent involves conceptual change. You embrace this change, and suggest that a computer that functions the way intelligent humans function is indeed intelligent. Many philosophers of mind agree with you. You further suggest that emotion is a necessary component of an intelligent being. This is a bit more controversial. You may have watched Watson, IBM’s computer, beat the world’s best human Jeopardy players recently. Many would be inclined to say that Watson is intelligent, but lacks emotion. Your final, and most provocative, claim is that such artificially intelligent entities have moral status: that under some circumstances, it would be wrong to unplug them. (See the termination...