October 18, 2012 (1 response)
August 9, 2012 (1 response)
Perhaps we might start with a distinction between two things the accusation of nonsense might mean. One is that it's patently false; the other is that there's no coherent idea. Your worry is pretty clearly the latter, and I'm sympathetic: whatever exactly Jung meant, it's hard to be sure that one has gotten hold of it. With that in mind, my sense is that there's an interesting idea behind the notion of synchronicity, though not one I'm inclined to believe.
Insofar as I understand it, synchronicity is meaningful coincidence. More particularly, it's meaningful coincidence between an inner state of mind and an occurrence in the outer world. By saying that synchronicities are coincidences, Jung meant that neither of the events causes the other. By saying that the coincidence is meaningful, Jung seems to mean two things. The first, and more obvious, is that the outer event corresponds in a meaningful way to the inner state. In one of Jung's well-known examples, a patient was recounting a dream about a golden scarab. At that moment, Jung heard a noise outside his window. He opened it, and a beetle flew in - one with an iridescent coloring that suggested the golden scarab of the woman's dream. Jung grabbed it and presented it to the patient with the words "Here is your beetle." According to Jung, this led to a breakthrough in the woman's treatment. The apparently meaningful correspondence is clear enough. The second aspect of this meaningfulness is that such events are not accidents or chance; they are not coincidences in the sense of what we might call mere coincidence.
This obviously raises a good many questions. One is why we should believe that cases like this are not mere coincidence. Jung seems to have thought that apparently meaningful coincidences happen more often than chance would predict. If that were true, it might provide some evidence for the existence of genuine synchronicities, though how one would go about collecting the evidence, let alone calculating the relevant probabilities, is very hard to say. And even if we were able to establish that meaningful coincidences happen more often than chance would predict, it would take yet further argument to decide whether such "connections" were cases of one event causing the other, cases of both events having a common cause, or cases of some other sort of relationship altogether.
Of course what Jung had in mind fits into a broader picture in which meaning is woven into the universe itself. In fact, Jung's outlook has more in common with the views of, say, Renaissance figures such as Marsilio Ficino or Heinrich Cornelius Agrippa than with the way those of us who admire science look at things. This is part of what makes it hard to get a grip on; we aren't used to thinking that way. For my own part, I don't share Jung's outlook, but I find the exercise of trying to grasp its outlines a fascinating one. I'm deeply skeptical of Jung's view, but I'm not prepared to say that the idea of synchronicity is simply unintelligible.
July 19, 2012 (1 response)
Suppose we individuate concepts by "reference," so that two mental states/thoughts are identical if they are about the same things, and otherwise different. If (arguably) one twin's thoughts 'refer to' H2O and the other's 'refer to' XYZ, then they would count as different thoughts or concepts. What you are merely presuming is that the notion of 'concept' should be narrowly individuated (i.e., defined only in terms of what's 'in the head,' so that the two twins should have the same concept). But that is the very thing that is explicitly being debated in the classic papers by Putnam, Burge, and all the rest!
July 26, 2012 (1 response)
You may not realize it, but you have presupposed the answer to your question in the way you asked it. You speak of the computer "trying" to execute a code. Trying involves intending to do something. So if you are not speaking metaphorically, you are presupposing that computers can have intentions, and that the computer in your case already has one. If the computer can really be said to be trying, then the additional detail in your example (viz., that there's a temporal gap between the computer's beginning to try, and the execution of the intended act) doesn't matter.
Now maybe you meant to be using the term "trying" loosely, or metaphorically, and then your question was whether the term "intention" could be strictly and literally applied to a computer. That's a good question. The answer, however, is not going to depend on whether there's a temporal gap between the trying and the successful execution. You can see that if you consider some non-controversial cases of something's intending to do something. So suppose that I intend to type the letter "x". There's probably very little time between the formation of my intention and the initiation of the motor routine. (Note -- some neuroscientists and some philosophers think that there's empirical evidence that the initiation of at least some motor actions precedes the formation of the intention. It certainly seems that our awareness of the formation of an intention can come after the action has been initiated. But the matter is controversial.) In other cases, as for example when I form the intention to write a philosophy paper, there can be an extremely long gap between my forming the intention and my executing it. So timing is not the important factor.
What is important? First is what it is for something to have an intention; the next thing is what it takes for something to meet those requirements.
I think that an intention is the product of a desire for something and a belief about the means necessary to obtain it. So to have an intention, one must at least be the kind of thing that has beliefs and desires. That's a pretty neutral claim. Most contemporary philosophers will agree with it. Some philosophers will add that a thing also needs to be capable of action in order to have an intention. That's a little more controversial, depending on what's meant by "action". If mental actions count (like doing sums in one's head, or recalling the words to a song), then it's also pretty uncontroversial. But let's focus on the first necessary condition: beliefs and desires.
So: what does it take for something to have beliefs and desires? Here, you'll get different answers from different philosophers. But here's mine: I am a computationalist about the mind. I think that beliefs and desires are certain kinds of functional states, involving relations to representations. So I see no reason why a computer could not, in principle, have beliefs and desires. But there's another requirement for something to have a mind, and that's that the representations have to have genuine meaning (confusingly, philosophers use the term "intentionality" to mean "genuine meaning" as well as to mean "being in a state related to intentions"). Currently existing computers operate with merely formal representations -- any meaning the representations have is meaning that we, the designers and users, choose to impute to them.
Now as I said, philosophers are going to disagree about all the elements of my view. But the main thing is that most philosophers do think that being able to form intentions requires having a mind, and they think, further, that it is a real fact about the world that some things have minds and some things don't. An interesting exception is Daniel Dennett. He denies that there is any specific property or form of organization that is necessary for something to have an intention; he thinks that as long as a thing's activity can be usefully described in intentional terms -- terms like "believe," "desire," and "intend" -- then that thing can be truly said to have intentions. As he puts it, for a being or a system to have mental states is for it to be fruitful for an observer to take the "intentional stance" toward that being or system. What's crucial about the pattern of activity, what makes it interpretable as intentional, is that the activity looks rational. So, for example, if you are playing chess or hearts (more my speed) with a computer, you might find yourself wondering what move the computer "is thinking about making". And the way you might think about this is by pretending that the computer knows certain things -- the rules of the game, the moves that have already been made, the moves that are open to it -- and wants certain things -- to win the game -- and then figuring out what any rational being would decide to do in those circumstances. What Dennett would say is that if you are able to sustain play this way -- if what the computer does continues to look rational in light of what you are pretending to be its beliefs and desires -- then you are not merely pretending. All it is for the computer to be really intending things, Dennett would say, is for its behavior to display a pattern that makes it fruitful to attribute beliefs and desires and intentions to it.
So Dennett might well say that the computer in your example is intending to execute the code, regardless of whether it satisfies all those other conditions I gave. But he'd want to know more about the computer's behavior, to see whether the pattern supports our taking the intentional stance. But again -- the time between intention formation and execution is not pertinent.
Dennett, by the way, thinks that nature itself is an intentional agent, because we can think of natural selection as a rational process. And some philosophers, like Deborah Tollefsen, who agree with Dennett about intentionality in general, think that groups of agents, things like the Supreme Court, can literally have intentions. I think that, whether or not one wants to use the term "intention" the way Dennett recommends, there's still going to be a difference between the kinds of beings that satisfy the conditions I sketched and those that don't, and that the difference is important for lots of reasons.
But still -- nothing depends on time!
July 12, 2012 (1 response)
Just a brief answer -- but to me (anyway) the idea of there being a property-dualism, closely related to a concept-dualism, is more plausible (and even more intelligible) than the idea of there being a substance-dualism, as implied by the phrasing of your question. Certain kinds of properties (such as being a sensation, or a thought, or some aspect of consciousness) may well be non-identical with, or non-reducible to, standard physical properties (such as being a brain state/event/property) -- but to go from there to the conclusion that "there exist non-physical souls or minds" seems like a very large, hard-to-defend, and unnecessary leap .... (and then from there to "immortal" -- well, that brings in a whole extra set of religion-related issues that probably are best left out of discussions of dualism itself, IMHO) ....
As you probably know, the locus classicus of substance dualism is Descartes -- though my own feeling is that the substance part of it is a little overblown by his subsequent interpreters, and that he may be more cautiously read as a property dualist ... But in the 21st century you might want to begin by exploring the work of David Chalmers on this subject (if you haven't already) ...
hope that's a useful start ...
July 12, 2012 (2 responses)
Eddy Nahmias and William Rapaport
By "modern philosophers" I am assuming you mean contemporary philosophers. (We philosophers use "modern philosophers" to refer primarily to European philosophers from roughly 1600-1900, and among that group there are a number of substance dualists, including Descartes, Malebranche, Leibniz, and arguably Kant).
Among contemporary Western philosophers, there are not that many substance dualists, though the position has been making a bit of a comeback recently. Of note are E.J. Lowe, Richard Swinburne, and (I think) Alvin Plantinga. I am likely leaving out others. There is an even bigger resurgence of "property dualists", people who argue that the universe consists of just one kind of substance, but that all (or some) of that substance has both physical properties and mental properties. David Chalmers played a big role in motivating this position. Recently, Susan Schneider (if I understand her correctly) has argued that you can't be a property dualist without accepting substance dualism.
The dominant position in philosophy of mind is physicalism (the view that everything that exists consists of stuff that can be described in the language of natural sciences), and among the physicalists there are reductive physicalists and non-reductive physicalists. But I won't drone on any longer about all this!
This SEP entry on dualism may be of interest: http://plato.stanford.edu/entries/dualism/
July 5, 2012 (2 responses)
Oliver Leaman and Peter S. Fosl
July 5, 2012 (1 response)