If we as professional and amateur philosophers are to accept psychology as a science (see question 5362 answered by Charles Taliaferro), then what does that make psychiatry? Even if a person fits all the criteria for having a mental disorder according to the principles of psychology yet refuses treatment, why should medical or pseudo-medical psychiatrists intervene, so long as that individual refuses to concur with the diagnosis or, more importantly, refuses the treatment while accepting the diagnosis? Isn't psychiatry then just a mostly futile exercise in normative ethics, subject to the whim of the patient?

I don't think much depends here on whether psychology or psychiatry is a science. What matters is whether they are doing good or not for patients. Supposing we think they are, a commonsense criterion seems to be that even if patients do not accept the diagnosis or the treatment, things change if they are contemplating harm to themselves or others, as the jargon has it. The jargon persists and is more than jargon because it has a real role; it is a criterion. A second criterion or perhaps test might be this. Can the patients get through the day or week to their own satisfaction, or look after themselves OK? If I have a friend who never gets up, doesn't bathe, won't look me in the eye, can't work and doesn't eat, and has gone down to eighty pounds, then I feel justified in intervening, because of this second criterion, over the stated wishes of the patient. Something is wrong, and I have an obligation, or some responsibility anyway, to help, especially if I am a psychiatrist. Most people in trouble do want...

What is the difference between determinism and the principle of sufficient reason? Thanks, Mark

Hi Mark, The principle of sufficient reason, due to Leibniz, states that there is always a reason why some particular thing happens, rather than some other thing. This does not immediately or obviously pose a threat to freedom. Note that "reason" does not mean the same as "cause", although a cause might be a reason. Determinism states something much stronger, more complicated, and more sinister. It tells us that the laws of nature and the initial state of the universe at some time in the past entail the state of the universe in the present. Entailment is a strong relation, and what determinism means is that if the laws are whatever they are and the initial state of the universe is whatever it is, then the universe must (nota bene, "must") go into the subsequent state. There is a necessary truth here: if the universe is in the initial state, and the laws apply, then the universe will go into the second state. Determinism has been held to pose a severe threat to freedom in the...
War

If State A attacks State B's military apparatus knowing full well that there will be civilian collateral damage, then why is it that, even if State B retaliates by intentionally targeting civilians, it's terrorism?

State A may know that there may be harm caused to civilians, as a matter of probability, but without targeting civilians, as for example during the invasion of Normandy. If State B targets civilians and does so with the intention of causing terror, that is a very different matter, as for example in the Blitz and the bombing of Dresden. At least so says just war theory. And there is sense to it. A defensive war could not be fought if no foreseen but unintended harm to innocents were permitted. But intentionally and deliberately harming civilians or non-combatants is an element in terrorism, and more broadly in military actions that are not just. All this (and more) is the point of the doctrine of double effect, which is an extension of Aquinas' treatment of homicidal self-defense. In defense of myself, do I intend to kill my assailant? Or does that killing come about as a foreseeable but unintended consequence of my self-defense? Besides, Aquinas' example was concerned only with self-defense, and there is no imaginable...

Andrew is obviously right, but what he is proposing is actually a utilitarian basis for double effect.

I believe that God is the greatest conceivable being, and I also came to believe again, having been a former agnostic, that He really exists. My question is regarding the responses of some atheists to some traditional arguments for God's existence, most especially to the design argument: that, given these designs in nature, we should not rule out the possibility of a finite god, an evil god, or many gods who designed our universe. I think all those opinions are false because, being the greatest conceivable being, God cannot be finite or evil, and there cannot be two greatest conceivable beings. But I just wonder why God should be the greatest conceivable being. Is it not possible for there to be a God or gods who are finite and/or evil, and leave it at that?

Stephen is right. We should distinguish the Design Argument from the Ontological Argument. Your question concerns neither. Your question is about the Problem of Evil, so called. How can a being who is all-good, all-powerful and all-knowing allow evil to exist? The simplest way to solve this problem is to deny one of these three propositions, and it is perfectly acceptable to deny the second: God's power is limited. This approach is taken by process theologians, who say that God is developing. For the typical process theologian, as for the Mormon, God cannot break the laws of nature, for example. The trouble with this solution is not that it does not work for the theist. The problem is that it does not work for the traditional Christian theist, and as far as I know also for the Jewish and Muslim theist. A god with limited powers is simply not recognizable as the Creator of the Universe, the Father Almighty, and so on. So the solution is logically acceptable, but theologically unacceptable.

Can we assume that our pet dogs feel love towards us?

There are numerous complex issues here in the philosophy of so-called animal cognition or comparative ethology, but it seems to me that the burden of proof is with anyone who says no. The same issue arises, clearly, for human beings. So if we say that we do not know that the beagle feels love when he wags his tail and bays a bit and licks us and even gives us little nips behind the ears, and is obviously happy - more than happy - to see us, and delights in our presence, why would we not say the same about the human being doing these things, or their non-beagle equivalents? It's no good saying that he's doing it because we feed him. The same is true in the human case, but the manner of feeding is different, as is what is fed. It is difficult to imagine an ant loving us, but I think that is because there is no demonstration of affection from ants, no licking or running round in circles and so on. They would be ignoring us, if they were human and doing what they do. None of this is an assumption, though; it...

Hi, I'll just share my experience below and would like to ask what principle or theory could possibly explain the phenomenon, and what term you call it. I'm a computer programmer. Sometimes there are program-logic problems that I have been trying to solve for hours, and yet cannot figure out the answers. But when I ask a colleague about the problem, in an instant, even before my colleague answers my question, I am able to draw the answer from my mind. Then I tell my colleague, "Uhm, OK, I know already! Thanks." It always happens. Sometimes just the presence of another person helps you to resolve your problem.

You have described a fascinating phenomenon that I think is remarkably common, though I don't agree that it always happens. It certainly happens frequently in my experience. Perhaps we both have very bright colleagues whom we happen to know very well, and can anticipate what they will say! I am delighted to see "the phenomenon" so well described. However, in the form you present it, I think most philosophers and psychologists would say that the question you ask is a psychological one, not a philosophical one, and that no doubt it is amenable to empirical research. Still, it does prompt a philosophical thought or two. I am put in mind of Wittgenstein's observation that 'In philosophy it is not enough to learn in every case what is to be said about a subject, but also how one must speak about it. We are always having to begin by learning the method of tackling it.' Perhaps when you ask a colleague about your problem, you have to decide not just what to say but how to say it, and that is enough...
Art

Can you name an attribute such that all the paintings which have this attribute are good paintings?

You might think that translucency is a good thing in a watercolor, but not in gouache. Verisimilitude might be good in a portrait, but not in an expressionist landscape. And so on. On the other hand there is a logical (or, with a stretch, a "metaphysical") attribute that all good paintings have: they meet the criteria for excellence in paintings of that type. Helen Knight on the use of "good" in aesthetic connections is brilliant on this subject.

Hello philosophers, recently in a debate with Christians I made the point that if one claims a relationship with a God or being that can't be seen, heard, or touched, then they are suffering from a delusion; is this an unfair statement, and if so, why?

It's fair if it's a good argument. But is it? Is your premise true? Your Christian friends could have replied that when they think about the number three, say, or any other abstract entity, they have a relationship with it. Yet one cannot see, hear, or touch such a thing. Nor does the other entity have to be abstract. One could reasonably claim to have, in a conversation, a relationship with another mind, but minds, even though they are concrete, "can't be seen, heard or touched", and people who make such claims are not suffering from a delusion.

I was puzzled not to find any mention of "emptiness" (as expounded upon by Nagarjuna and Chandrakirti, not the feeling one blogger has when his relationships end). Is that not an issue that our learned philosophical crowd seriously contemplates these days?

I have to say I think about nothing all the time, both in the sense of not thinking about anything, and in the sense of contemplating the concept nothing. P.L. Heath has a very fine piece on "Nothing" in the Encyclopedia of Philosophy. On the other hand I suspect that śūnyatā is not really emptiness or literally nothing - śūnyatā is a kind of non-substantiality, certainly, but in Western metaphysics that does not mean a non-entity. The point is that śūnyatā is emptiness, but emptiness of the detritus of external influence, or void of outside dependence. It has its own quality. It is also like a kind of expectant fullness, empty as the rich expectation of a joyous future event is in an obvious sense empty (of the event) compared with the experience of the event itself. A quality, however, is an entity, though not a substance.

What is the essential relationship between rules of grammar and a living language? Is it primarily descriptive or prescriptive? I am fluent in 3 languages, and it seems to me that native speakers, especially in Chinese, rarely know much about the "rules". Native speakers, instead, are confident. They don't worry that what they say is "wrong", and they're also inventive and creative. American kids are fond of saying about someone that "so-and-so is boss". It seems to me there are many instances now of nouns being converted to adjectives: so-and-so might also be "legend", for instance. Native speakers don't fear that they are "wrong". Does grammar play catch-up, like law with technology, or does grammar just unctuously go on insisting that such statements are "wrong"?

It is possible to have a view in which a grammar has some description, and a bit of prescription. If someone speaking English used only "sentences" without verbs, then a bit of prescription might be in order, and the response that "grammar is descriptive not prescriptive" would or should fall on very deaf ears. For the language at the moment does have verbs. And the same is true if someone confuses "infer" and "imply", for example. On the other hand in American English "insure" has replaced "ensure", so "insure" in US parlance now has two meanings, and one of them ("to make sure that") may be in the process of being forgotten. I find this hard to take, myself, but a descriptivist has a case here too. It is obvious that most living languages do change over the years by shifts like this one. But the basic structure of the sentence is fairly stable over a long time. I do not think that English could suffer the loss of all its verbs and remain in any reasonable sense the same language. About changes over time...
