During a 'debate' with a friend about same-sex marriage, he raised the issue of marriage being 'by definition union between a man and a woman', and appeared to hold that this was grounds for rejecting same-sex marriage. My question does not relate to the ethics surrounding the issue, but rather to the fallacy I thought he had committed in saying this. It seemed to me as if he was stating the conclusion of an argument that had not been argued (at least, not by us, either at or prior to that time), namely, that marriage is, in fact, the union as mentioned. Is this what is known as 'begging the question' (i.e., stating a point that remains to be proven as the foundation for another conclusion)? If not, then what is the formal term for this fallacy (if it is, indeed, fallacious)?

There certainly is a fallacy here, but I don't know what it should be called. In the end, perhaps it is a simple fallacy of equivocation: an equivocation between two senses of the word "definition". Philosophers since Aristotle (at least) have distinguished two types of definitions: definitions of words and definitions of things. Personally, I find the latter notion hard to understand, but the idea is that a definition of a thing tells you what it really is. So gold, for example, might be defined as that element that has atomic number 79. This is very different from saying that the word "gold" is defined this way. The word "gold" could not have been defined that way before the establishment of modern chemistry, but, nonetheless, the true definition of gold—what gold really is—is: the element with atomic number 79. You cannot, therefore, find out what the real definition of a thing is by consulting a dictionary: That will tell you only how the word is defined. What the "real definition"...

Natural language statements have quantifiers such as "most", "many", "few", and "only". How could ordinary first-order predicate logic with identity (hereafter, FOPL) treat statements containing these vague quantifiers? It seems that FOPL, with only the existential and universal quantifiers at its disposal, is insufficient. I read somewhere that 'restricted quantification' notation can ameliorate such problems. Is this true, or are there difficulties with the restricted quantification treatment of vague quantifiers? What are some of the inference rules for restricted quantification notation? For example, in FOPL you have the existential instantiation and universal instantiation inference rules. Are there analogous inference rules for the quantifiers "many", "most", and "few"? Can you recommend any books or articles that outline, critique, or defend restricted quantification? I also read that there are issues with FOPL regarding symbolizing adverbs and events from natural language. Is this true...

One further point. Toward the end, you write: "These seem to be grave problems for the applicability and effectiveness of FOPL to natural language arguments. (I am not referring to the 'limits' of FOPL, where extensions such as modal, tense, or second-order logic might accommodate the richer parts of natural language, but rather to the apparent inability of any logic(s) dealing with these problems.)" Waiving the issue about vagueness, there isn't any problem dealing with such quantifiers in a second-order context. Both of the quantifiers I mentioned, "Most" and "Eq", can be defined in second-order logic, so the caveat at the end kind of gives the game away. That said, what perhaps is puzzling about these quantifiers is that, as is the case with second-order quantifiers, there is, as I said, no sound and complete set of rules for them, with respect to the intended semantics. In that sense, there is no "formal" logic for these quantifiers. But, again, that is not to say that one cannot write down some...
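For concreteness, here is one standard way such a definition goes; the formulation below is my own sketch, not part of the original exchange. The idea: there are more Fs that are G than Fs that are non-G just in case no relation R maps the former one-to-one into the latter. ("Eq", read as "exactly as many", would be defined similarly, with R required to be a bijection.)

```latex
% A second-order definition of "Most" (a sketch): there is no relation
% R mapping the F-and-Gs one-to-one into the F-and-not-Gs. On finite
% domains this is just counting; on infinite domains, reading "more
% than" this way presupposes the axiom of choice.
\[
(\mathrm{Most}\,x)(Fx;Gx) \;:\equiv\;
\neg\exists R\,\bigl[\forall x\,\bigl(Fx \land Gx \rightarrow \exists y\,(Rxy \land Fy \land \neg Gy)\bigr)
\;\land\; \forall x\,\forall y\,\forall z\,\bigl((Rxz \land Ryz) \rightarrow x = y\bigr)\bigr]
\]
```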

There are a lot of different questions here, and we need to disentangle some of them. First, some of the questions you are raising about "most", "few", and the like have nothing to do with their vagueness. Consider, for example, a quantifier I'll write "(Most x)(Fx;Gx)". This is what is called a binary quantifier (similar to your "restricted" quantifiers): Unlike the usual way of representing "all" and "some", it forms a formula from two open sentences. Now, define the quantifier, semantically, so that "(Most x)(Fx; Gx)" is true if, and only if, there are more Fs that are G than there are Fs that are non-G. (More generally, we'd have to talk about satisfaction, but waive this complication.) It can be proven that this quantifier cannot be expressed by any formula of FOPL. It can also be shown that there is no sound and complete axiomatization of the logic of this quantifier. That isn't to say you can't write down some sound rules. But you can't write down a complete set of rules: No matter...
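To make the semantic clause concrete, here is a small model-checking sketch (mine, not the answerer's; Python, finite domains only, with illustrative predicate names). It also shows what makes the quantifier binary: it takes two open sentences at once. The metatheorems just mentioned (inexpressibility in FOPL and non-axiomatizability) are, of course, not things a finite computation can exhibit.

```python
# A minimal sketch: evaluating the binary quantifier (Most x)(Fx; Gx)
# in a finite model. True iff more Fs are G than are non-G,
# i.e. |F & G| > |F & not-G|.

def most(domain, F, G):
    fg = sum(1 for x in domain if F(x) and G(x))           # Fs that are G
    f_not_g = sum(1 for x in domain if F(x) and not G(x))  # Fs that are not G
    return fg > f_not_g

# Example: in the domain 0..9, most even numbers are less than 7.
domain = range(10)
is_even = lambda x: x % 2 == 0   # F
below_seven = lambda x: x < 7    # G
print(most(domain, is_even, below_seven))  # True: 4 of the 5 evens are < 7
```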

Is there such a thing as formal inductive logic? It seems to me that whether an inductive argument is good depends on its semantic content, not just its syntactic form, which makes it impossible to formalize in the way that deductive logic is formalizable.

There have been several attempts to formalize an inductive logic, but none has been as uniformly successful as the formalizations of deductive logic. See the Stanford Encyclopedia of Philosophy entry on inductive logic for more.

Does the relation of self-similarity exist? It seems obvious that it does, since nothing is self-dissimilar. But if it does then it, as a relation, must be self-similar, and this second relation of self-similarity must be self-similar, and so on ad infinitum. And surely the Universe is not crammed with an infinity of relations of self-similarity. But does that mean that nothing is self-similar?

I lost you right here: "this second relation of self-similarity must be self-similar". What is the "second" relation? I thought it was just the relation of self-similarity, which, as you say, the relation of self-similarity presumably has to itself, since everything is self-similar. It's perhaps worth noting that we can construct an analogous set of questions using identity rather than self-similarity: Everything is identical with itself; so the relation of identity is, as a relation, also identical with itself. But again, the relation identity has to itself is just identity: The relation of identity bears itself to itself. So it doesn't seem to me that this line of argument requires "the Universe [to be] crammed with an infinity of relations of self-similarity", and I'm not sure why it seems so obvious that it isn't. That said, however, the kind of language you use here—talking of a relation standing in itself to itself—can cause problems. Some relations, as we have just seen, stand to...
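To see why the regress is harmless, it may help to formalize the identity version of the point. This is my own sketch, and it assumes a setting untyped enough for a relation to take itself as an argument:

```latex
% Writing Id for identity, reflexivity is the single universal fact
\[
\forall x\,(x = x),
\]
% and instantiating x with the relation Id itself yields Id(Id, Id).
% Instantiating again yields the very same fact: the regress re-uses
% one relation rather than generating new ones. (In a typed system the
% self-application would be blocked, which is one reason the language
% of a relation "standing in itself to itself" can cause problems.)
```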

Hello philosophers. I was just wondering about Gödel's Incompleteness Theorem. What exactly is it and does it limit what we are capable of knowing? I have no training in mathematics or formal logic so if you could reply in lay terms, I would appreciate that. Thanks, Tim.

Regarding your second question, whether the incompleteness theorem limits what we are capable of knowing: people disagree about this. But the short answer is: There is no decent, short argument from the incompleteness theorem to that conclusion. If it does limit what we are capable of knowing, then it will take a very sophisticated argument to show that it does. One might think it followed from the theorem that we cannot prove that PA (Peano Arithmetic, a standard formal theory of the natural numbers) is consistent. But we can. I proved it yesterday, in fact, in my class on truth. The incompleteness theorem says only that we cannot prove in PA that PA is consistent, if PA is consistent. (If it's not, then we can prove in PA that PA is consistent! But that won't do us much good, since we can also prove in PA that PA is not consistent, and indeed prove absolutely everything else in PA, e.g., that 2+3 = 127. This last remark assumes that we have classical logic at our disposal.) So when I proved that PA was consistent, I didn't do so in PA. I...
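For reference, here is the precise statement being appealed to (a standard formulation; the gloss is mine, not part of the original answer):

```latex
% Goedel's second incompleteness theorem, in its usual form:
\[
\text{If } \mathsf{PA} \text{ is consistent, then } \mathsf{PA} \nvdash \mathrm{Con}_{\mathsf{PA}},
\]
% where Con_PA arithmetizes "no PA-proof ends in a contradiction".
% This leaves room to prove Con_PA elsewhere: Gentzen (1936) derived
% it using transfinite induction up to the ordinal epsilon_0, and it
% is a routine theorem of ZFC.
```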

In response to question 26 [http://www.amherst.edu/askphilosophers/question/26], should it not be possible for an omnipotent being to create the possibility for a contradictory object to exist?

I'm not sure why that should be possible. Indeed, suppose we accept that it is not possible for an omnipotent being to make some contradiction true. Then—if we assume that anything possibly possible is possible (this is the modal axiom known as "4")—it follows immediately that such a being cannot make it possible for a contradiction to be true, either. If s'he could, then it would be possible that it was possible for a contradiction to be true, in which case it would be possible for a contradiction to be true, which it is not. That said, there are some philosophers who think that some contradictions are true, and they would have an easier time, I take it, with this kind of question.
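For the record, the little argument in the middle can be set out in two lines. This rendering is mine, using the diamond form of axiom 4:

```latex
% Axiom 4 is usually written \Box p \rightarrow \Box\Box p;
% its dual form is:
\[
\Diamond\Diamond p \rightarrow \Diamond p.
\]
% Contraposing and instantiating p with a contradiction q \land \neg q:
\[
\neg\Diamond(q \land \neg q) \;\rightarrow\; \neg\Diamond\Diamond(q \land \neg q).
\]
% So if a contradiction cannot be true, then it cannot even be
% possible that it is possible; hence no being can make it possible.
```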

A question about logic. When symbolizing and making inferences in natural languages that contain such terms as "it is necessary that", "A ought to do X", "A knows X", and "it is always the case that", there are extensions of classical logic, respectively, modal, deontic, epistemic, and tense logic that attempt to deal with such natural language analogues. My question is: What about propositions that contain a mixture of all the above terms? For example, there are sentences in natural language of the form “It is necessary that John ought to always know that 2+2=4." Is there a logic that can effectively handle (i.e. symbolize and correctly infer) such propositions? If so, is this logic both sound and complete? If there is no such logic, what is a logician to do with such propositions? My intuition is that things get tricky when you mix these operators together and/or the classical quantifiers. Thanks kindly for your reply, A Concerned Thinker

Things get tricky anyway when you mix modal operators and the quantifiers, so it's best if we just stick to the propositional case. And I'll add, just by the way, that it is quite controversial whether such "operator" treatments are correct for any of these cases, more so for tense, perhaps, than for the rest. Most semanticists nowadays, I believe, would take tense in natural language to be quantificational. And David Lewis, of course, held the same about "necessarily". There are really two kinds of questions here: Can one write down some plausible logical principles governing (say) a language with two such operators? And then, can one develop a semantics for this language and prove soundness and completeness? As for the former question, the interesting issue is what principles should connect the two kinds of operators. Presumably, for example, we should have "If it is necessary that p, then it is always the case that p", but not conversely; and I suppose some people would have us assume "If...
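To illustrate what such connecting principles look like semantically, here is the standard frame-correspondence fact behind the first example (my sketch; I write A for "it is always the case that", and the relation names are illustrative):

```latex
% Bimodal frames (W, R_N, R_T) interpret "necessarily" and "always":
\[
w \Vdash \Box p \iff \forall v\,(w R_N v \Rightarrow v \Vdash p),
\qquad
w \Vdash \mathrm{A} p \iff \forall v\,(w R_T v \Rightarrow v \Vdash p).
\]
% The connecting axiom \Box p \rightarrow \mathrm{A} p is valid on a
% frame just in case R_T is included in R_N. The converse axiom would
% require the reverse inclusion, which is why it is not assumed.
```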

A friend once had me consider this logic. Because the Catholic Immaculate Conception doctrine is a cornerstone tenet of the church, but is essentially a dogmatic belief, any dogmatic doctrine canonized by the church must also be as worthy of faith as the Immaculate Conception doctrine. However, the doctrine of transfiguration is also a dogmatic belief. Yet even after a priest has blessed the sacramental wine and bread, in reality it does not literally transfigure into the blood and body of Christ, even though the doctrine of transfiguration states that it does. If the wine does not literally turn to blood, the doctrine of transfiguration is wrong, and because the doctrine of transfiguration is as valid as the Immaculate Conception, it too is wrong by association. However, if the Christ were literally made of bread and wine, then all conflicts would be resolved. Can you please comment on this logic? Thank you

I'm not sure there's much "logic" there, frankly. First, the relevant doctrine is that of transubstantiation, not transfiguration. The latter term refers to the events described in, e.g., Luke 9, when Jesus appears "transfigured" in the presence of Elijah and Moses. Second, I'm not entirely sure why it is so obvious to you, or to your friend, that the consecration does not transform the elements into the body and blood of Christ. The fact that they do not look much like flesh and blood has nothing to do with it. (The Wikipedia article on the topic is excellent, by the way.) That said, transubstantiation is controversial within Christianity. It is, as was said, a pillar of the Catholic faith, but it is not widely accepted outside Catholicism. I see no reason to suppose that all "dogmatic doctrine[s] canonized by the [Catholic] church" must stand or fall together. One might reason thus: If one of them turns out to be wrong, that diminishes whatever general reason one had to suppose that official...

As a beginner in philosophy, I got the impression that philosophy is all about arguments. You put in statements (premises), use some rules of argumentation to manipulate these premises, and reach other statements (conclusions). Is there a way to argue for the rules of argumentation themselves? I mean, we use them all the time but how do we know that they are true? What kind of rules would we use to prove the rules of argumentation? Can we use the same rules? Thanks.

This is a difficult and somewhat contested question. Obviously, you cannot argue for the rules of argumentation except by arguing, and if there are rules of argumentation that must be followed if an argument is to be compelling, then one had better follow them. So there is, obviously, a kind of circularity in any such argument. But it is an unusual, and somewhat confusing, kind of circularity. The worst kind of circularity is what is called "begging the question". That is when you simply assume, perhaps tacitly, precisely what you want to prove. It's a bad kind of circularity because, if you assume that P, you can hardly help but reach the conclusion that P. That's not what's happening in the case we're considering. Suppose we're trying to prove the logical principle of disjunctive syllogism, which says that, if "A or B" is true and "not A" is true, then B must be true. We might argue for it as follows. Suppose that "A or B" is true and that "not A" is true. Since "A or B" is true, either A is...
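For what it's worth, the semantic argument being developed at the end can be displayed as a simple truth table (my addition, mirroring the reasoning rather than replacing it):

```latex
% Truth-table check of disjunctive syllogism: the third row is the
% only one making both premises true, and there the conclusion B holds.
\[
\begin{array}{cc|cc|c}
A & B & A \lor B & \neg A & B \\
\hline
T & T & T & F & \\
T & F & T & F & \\
F & T & T & T & T \\
F & F & F & T & \\
\end{array}
\]
```

Of course, convincing oneself that the table settles the matter itself involves inference, which is just the kind of circularity the answer describes.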

If God is omnipotent then surely he can do anything!? My intuition tells me he can defy logic because surely he created it. I know philosophers will then ask me if it is possible for God to create a world where it both doesn't rain and rains at the same time. I am then forced to say that of course this doesn't seem possible. But...this leaves me with two questions: (a) Why do philosophers always have to talk about 'possible worlds'!? (b) Surely a world of contradictions only seems implausible to us because we are reasoning from the knowledge and experience we have in this world. We can't conceive of such ideas as not raining and raining at the same time because we are bound by the logic of this world.

On (a), I might add that possible worlds are an extremely useful tool. As Lynne mentioned, there are many different ways to understand what they are supposed to be. For many purposes, however, one can simply regard possible worlds as a certain kind of mathematical construct. The reason they are then so useful is that they allow us to understand, in a precise way, the logic of such expressions as "necessarily" and "possibly". In fact, there are many notions of necessity and possibility, and their logics can be very different. Perhaps more importantly, though, possible worlds help us to get a grip on so-called "counterfactual conditionals", like "If JFK had not been assassinated, the US would not have become as seriously involved in Vietnam". It might interest you to know that, even before "paraconsistent" logics came on the scene—these are the logics that allow contradictions to be true...and, of course, false—logicians were interested in logics of necessity and possibility that allow "It is...
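As a footnote to the remark about precision: these are the standard possible-worlds clauses for "necessarily" and "possibly", together with a rough Lewis-style clause for counterfactuals (my summary, not part of the original answer):

```latex
% A Kripke model is (W, R, V): worlds, accessibility, valuation.
\[
w \Vdash \Box\varphi \iff \forall v\,(wRv \Rightarrow v \Vdash \varphi),
\qquad
w \Vdash \Diamond\varphi \iff \exists v\,(wRv \land v \Vdash \varphi).
\]
% Varying the conditions on R varies the logic: e.g., requiring R to
% be reflexive validates \Box\varphi \rightarrow \varphi. Lewis-style
% counterfactuals, roughly: "if A had been, C would have been" is true
% at w iff C holds at the A-worlds most similar to w.
```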
