I am presently working through Grayling's Introduction to Philosophical Logic (Blackwell), after studying philosophy at university in the late 1960s. Can anyone recommend a follow-on text (for when I feel I have assimilated this book)? (I have seen the interesting replies to the August post about further reading on symbolic logic.) Peter

There aren't a whole lot of textbooks on this sort of thing. A more current text is John Burgess's Philosophical Logic. And, depending upon your interests, you might have a look at something like Graham Priest's Introduction to Non-classical Logic. Working through a serious textbook on modal logic would also be worth doing; the two classics are by Chellas and by Hughes and Cresswell. A quite different route would be to look into linguistic semantics. Many forms of philosophical logic (tense logic, modal logic, epistemic logic) originated as attempts to deal with some of the features of natural language that are omitted by quantification theory. But the relation between the logical treatments and natural language was always pretty obscure, and around 1960 people started to get much more serious about dealing with natural language in its own terms. Formally, much of linguistic semantics looks like philosophical logic (especially in certain traditions), but it is targeted at an empirical...

I have two questions about logic. In occidental thought, logic is generally presented as bi-polar (no pun intended); however, I am not quite sure of the 'correct' formulation. Is it: either "A is true" or "not-A is true"? Or is it: either "A is true" or "A is not true"? And could you expand on the distinction a little? More interesting, to me, is that I have heard the assertion that there also exists a "quadrant" logical system that also works: either "A is true", or "not-A is true", or "neither A nor not-A is true", or "both A and not-A are true". This latter form of logic seems to work for things like the polarization of light, for example (if you have a vertical polarizer in front of a horizontal polarizer, no light gets through; but if you insert a polarizer at a 45 degree angle in between the two, some light gets through). I can think of some other examples as well, but I'd prefer to hear your responses.

Classical logic (at least, understood from the perspective of classical semantics) rests in part upon the so-called "law of bivalence". This is usually formulated as: Every formula is either true or false. To put it slightly differently, we can begin with the idea (which emerges from Frege) that the "semantic value" of a formula is its truth-value. Then classical semantics involves two claims: (i) every formula has exactly one truth-value; (ii) there are only two truth-values, Truth and Falsity. This formulation does not involve any reference to "not", so the contrast between "not: A is true" and "A is not true" is avoided. There are several sorts of alternatives to this. The one you mention can be presented in several different ways. One involves retaining (ii) but dropping (i) and, indeed, not replacing (i) with anything. So, on this view, a formula can have zero, one, or two truth-values. Logics can be built on this idea, and they "work" in the sense that one can rigorously present them, study...
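One concrete way to see how a logic of this kind can be rigorously presented is the Belnap-Dunn four-valued semantics, on which a formula's "value" is the set of classical truth-values it receives, so that (ii) is kept while (i) is dropped. Here is a minimal sketch in Python; the particular encoding and the function names are my own choices for illustration, not any standard library:

```python
# Belnap-Dunn four-valued semantics: a formula's value is the SET of
# classical truth-values it receives (zero, one, or two of them).
N = frozenset()          # neither true nor false (a "gap")
F = frozenset({0})       # just false
T = frozenset({1})       # just true
B = frozenset({0, 1})    # both true and false (a "glut")

def neg(v):
    # Negation swaps "receives true" and "receives false".
    return frozenset({1 - x for x in v})

def conj(v, w):
    # A conjunction receives true iff both conjuncts do,
    # and receives false iff at least one conjunct does.
    out = set()
    if 1 in v and 1 in w:
        out.add(1)
    if 0 in v or 0 in w:
        out.add(0)
    return frozenset(out)

def disj(v, w):
    # Disjunction via De Morgan duality.
    return neg(conj(neg(v), neg(w)))

# "A and not-A" is not automatically false here: a glut stays a glut,
# a gap stays a gap, and only the classical values behave classically.
print(conj(B, neg(B)) == B)   # True
print(conj(N, neg(N)) == N)   # True
print(conj(T, neg(T)) == F)   # True: the classical case
```

On the classical values T and F, these clauses agree with the ordinary two-valued truth-tables; the novelty is entirely in how the gap and the glut propagate.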

Does Gödel's incompleteness theorem demonstrate that logic cannot be shown to be consistent and complete, because we cannot prove a system of logic without relying on logic, thereby begging the question? In other words, does it reveal a fallacy of "pretended neutrality"?

No, I do not think the incompleteness theorem has any such consequence. First of all, although the incompleteness theorem does apply to some formal systems that some people would classify as systems of logic (e.g., second-order logic), its primary application, as usually understood, is to formal systems of arithmetic or, more generally, of mathematics. What people usually mean by "logic" is first-order classical logic, and the incompleteness theorems do not tell us anything about its consistency or validity. (There is a nice proof of the undecidability of first-order logic that rests upon the completeness theorem.) There are two versions of the incompleteness theorem. The first shows that no (sufficiently strong) formal system is ever complete (if consistent): there will always be a statement formulable in the system that the system can neither prove nor disprove. The second shows that you cannot prove such a system to be consistent without relying upon assumptions that are, in a very specific sense,...

I have a question about existence within a formal system. Can we construct it so that a theorem t implies "there exists" theorem t itself? Thanks, Paul

I'm not quite sure what the question is here, but here's what I think is meant: Can we construct a statement S such that S implies that S itself exists? If that is the question, then the answer is "Yes", assuming we have some fairly minimal syntactic resources (namely, those sufficient for the purposes of Gödel's theorem). If we have those resources, then we know, by the so-called diagonal lemma, that, for any formula A(x), we can find a sentence G such that the following is provable: A(*G*) <--> G, where *G* is itself a name of the sentence G. G itself will be fairly long, but is something that can actually be written down. So now let A(x) be the formula: ∃y(y = x), i.e., "x exists". Then, by the diagonal lemma, we have a formula G such that we can prove: ∃y(y = *G*) <--> G. I.e.: G itself is provably equivalent to the statement that G exists. We can in fact say more. If we have the syntax we need, then we can prove that G exists. Hence, we can prove G itself,...
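The diagonal lemma is, at bottom, the same self-reference trick that lets a program print its own source code. As an informal analogue only (not the arithmetized construction itself), here is a standard Python quine: a program containing a template of itself, which it fills in with a quoted copy of that very template:

```python
# A quine: s is a template for the program's code, and s % s substitutes
# a quoted copy of s into that template, yielding the program's own source.
# This mirrors the diagonal lemma's move of applying a formula to a name
# of itself to obtain a fixed point.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints its last two lines verbatim: the string s % s is a fixed point of the substitution, playing a role analogous to the sentence G above.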

Does "if p=q, then nec(p=q)" hold, if "p" and "q" are intended to denote properties? I am told that it holds. But it doesn't seem to be quite right. It seems to depend on what it is for two properties to be identical. Am I confused?

If two properties P and Q are the very same property, then how could they have been different? One could, indeed, have discussions about what it is for P and Q to be the very same property. But, if true identities are always necessary, then that fact acts as a constraint on theories of property identity. For example, suppose someone suggested that properties are the same just in case they are (actually) true of the same things. Then it would be an objection to this view that there are properties that are true of the same things but might not have been. (The stock example: being a creature with a heart and being a creature with a kidney are, let us suppose, true of exactly the same creatures, though they clearly might not have been.)

So, it's my understanding that Russell and Whitehead's project of logicism in the Principia Mathematica didn't work out. I understand that two reasons for this are (1) that some of their axioms don't seem to be derivable from pure logic, and (2) Gödel's incompleteness theorems. However, particularly since symbolic logic and the philosophy of mathematics are not my area, it's hard for me to see how (1) and (2) work and defeat the project.

I think it's important to distinguish the two sources of "failure", not so much as regards Principia but as regards logicism quite generally. I'll stick, as Prof Smith did, to arithmetic. Here's a way to put Gödel's (first) incompleteness theorem: the set of truths expressible in the language of first-order arithmetic cannot be listed by any algorithmic method, i.e., it is not (as we say) "recursively enumerable". Now why is that supposed to show that logicism fails? Because the set of theorems of any first-order formal theory is recursively enumerable. This is a consequence of Gödel's first great theorem, the completeness theorem for first-order logic (and also of what we mean by a "formal" theory). So the truths aren't r.e. and the theorems are: you can list the theorems but not the truths, so the theorems can't exhaust the truths. Now why is this a problem for logicism? Obviously, as the argument has been stated, it depends critically upon the assumption that the proposed logical...
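The half of that argument which says the theorems of a formal theory are recursively enumerable can be made vivid with a toy. The system below is invented purely for illustration (one axiom, 'A'; one rule, append an 'A'); the general point it illustrates is that, because proofs are finite strings and proof-checking is mechanical, the theorems can be listed one by one:

```python
from itertools import count

def check_proof(lines):
    """Mechanically verify a candidate proof in the toy system:
    each line must be the axiom 'A', or must extend some earlier
    line by exactly one 'A'."""
    for i, line in enumerate(lines):
        ok = line == 'A' or (line.endswith('A') and line[:-1] in lines[:i])
        if not ok:
            return False
    return bool(lines)

def enumerate_theorems():
    # List the theorems by listing proofs in order of length and
    # yielding each proof's conclusion. In this toy system, the proof
    # of the n-th theorem is just ['A', 'AA', ..., 'A' * n].
    for n in count(1):
        proof = ['A' * k for k in range(1, n + 1)]
        assert check_proof(proof)
        yield proof[-1]

gen = enumerate_theorems()
print([next(gen) for _ in range(3)])   # ['A', 'AA', 'AAA']
```

Gödel's theorem supplies the other half of the argument: no such listing procedure, however ingenious, can produce exactly the arithmetical truths.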

How do you show that some conclusion of an argument cannot be derived in a complete system? Does one have to make the truth-table to show that it is not valid, so that, by definition, it should be impossible for that conclusion to be derived?

This question has essentially been answered, I think, in question 3211. In brief: a truth-table is one way to show that a conclusion cannot be derived, but it is not "by definition" that it cannot be. Rather, this is a consequence of the soundness theorem, which states that every derivable formula is valid. Since the truth-table shows the formula is not valid, it cannot be derived: otherwise, it would be valid. That said, a truth-table is not the only way to show that a formula cannot be derived. For one thing, there are also trees (or "tableaux"); for another, there are purely syntactic arguments that can be given to show, e.g., that a formula containing no more than one occurrence of any sentence letter cannot be derived. See question 3211 for a similar example.
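The truth-table route can be mechanized in a few lines. A sketch in Python, where formulas are encoded as Boolean functions of their sentence letters (just one convenient encoding among many):

```python
from itertools import product

def is_valid(formula, n_letters):
    """A formula is valid iff it comes out true on every row of its
    truth-table, i.e., under every assignment to its sentence letters."""
    return all(formula(*row) for row in product([True, False], repeat=n_letters))

# (P -> Q) -> P is not valid (it fails whenever P is false), so by the
# soundness theorem it cannot be derived.
peirce_antecedent = lambda p, q: (not ((not p) or q)) or p
print(is_valid(peirce_antecedent, 2))          # False

# By contrast, P -> P is valid.
print(is_valid(lambda p: (not p) or p, 1))     # True
```

The checker witnesses non-validity; soundness then converts that semantic fact into the syntactic conclusion that no derivation exists.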

Hello, I am asking this question after reading Richard Heck's answer to the question 'Can God make a rock both big and small?'. It's more a terminological question, really: what is the name of the philosophical approach that asserts that logic is a universal law that cannot be broken (not even by God)? Thanks, Amit.

I'm afraid I don't know the answer to this question. But the discussion of the paradox on Wikipedia is pretty good and might somehow lead to an answer. If anyone does know or find out the answer, email me and I'll post it here.