What books would serve as a comprehensive overview of philosophical and mathematical logic?

These are very large fields, so I'm not sure a comprehensive overview is really possible. (Though there is the umpteen-volume Handbook of Philosophical Logic.) That said, there is a recent book by John Burgess, titled Philosophical Logic, that gives a good introduction to that area. For mathematical logic, I like Peter Hinman's Fundamentals of Mathematical Logic, as it covers a fairly wide range of material.

What do philosophers mean when they describe one claim as being "stronger" or "weaker" than another?

When someone says that a claim P is logically stronger than a claim Q, it usually just means that P implies Q, but not conversely. Thus, an argument sufficient to establish Q need not be sufficient to establish P. So it's good if your premises are "weak". Then you don't need such strong arguments to establish them.
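In the propositional case, this relation can even be checked mechanically: P is stronger than Q when every truth assignment satisfying P satisfies Q, but not conversely. A minimal brute-force sketch (function names here are just illustrative):

```python
from itertools import product

def stronger(p, q, n_vars=2):
    """p is logically stronger than q iff p entails q but not conversely,
    checked by brute force over all truth assignments."""
    vals = list(product([False, True], repeat=n_vars))
    p_implies_q = all(q(*v) for v in vals if p(*v))
    q_implies_p = all(p(*v) for v in vals if q(*v))
    return p_implies_q and not q_implies_p

# "A and B" is stronger than "A or B", but not conversely:
print(stronger(lambda a, b: a and b, lambda a, b: a or b))  # True
print(stronger(lambda a, b: a or b, lambda a, b: a and b))  # False
```

So the conjunction needs more to establish it than the disjunction does, which is the sense in which weak premises are easier to defend.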

Is it impossible that there be two recursive sets T and T* of axioms (in the same language) such that their closures under the same recursive set of recursive rules are identical and yet there is no recursive proof of this fact? It seems impossible, but a simple proof of this fact would help elucidate matters!

I see the difference, and perhaps you are right about how the question should be interpreted. Of course, what makes it difficult to answer, in that form, is that the term "recursive", in "recursive proof", seems not to be doing any work. Perhaps what is meant is "finite proof" (as opposed to one containing infinitely many steps). But then, as you say, it is not clear what might be meant by asking whether some fact is or is not provable in some absolute sense, rather than in this theory or in that one. In that regard, it's perhaps worth pointing out explicitly that the argument you gave can be replicated for any extension U of PA. Let PA(U) be PA plus all statements of the form: n is not the Gödel number of a U-proof of a contradiction. Then it will be provable in PA (and therefore in U) that PA and PA(U) have the same theorems iff U is consistent, and now the rest of the argument goes through. We can certainly start with theories even weaker than PA; I do not know how weak we can go.
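Schematically, the construction just described looks like this (a sketch, writing Prf_U(x, y) for an arithmetized proof predicate for U and Con(U) for U's consistency statement):

```latex
\mathrm{PA}(U) \;=\; \mathrm{PA} \,\cup\, \bigl\{\, \neg\mathrm{Prf}_U\bigl(\overline{n},\, \ulcorner 0{=}1 \urcorner\bigr) \;:\; n \in \mathbb{N} \,\bigr\}
```

The claim is then that PA proves the equivalence of "PA and PA(U) have the same theorems" (expressed via provability predicates) with Con(U).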

Here are a couple of ways of spelling out Peter's earlier remark. The first starts from the fact that S is a theorem of T iff T∪{¬S} is inconsistent. If T is a recursive set of axioms, then of course so is T∪{¬S}. To check whether S is a theorem of T, then, you just need to see whether T∪{¬S} has the same theorems as some known inconsistent theory. So if you could, in general, decide whether recursive theories prove the same theorems, you could decide whether an arbitrary sentence was a theorem of such a theory, which you can't, since there are undecidable theories. Note that this also shows that, if you could decide whether two theories have the same theorems, you could decide whether an arbitrary theory is consistent. But, in response, one might try restricting the claim to consistent theories. So begin instead with the fact that S is a theorem of T iff the theory T∪{S} has the same theorems as T. If T is recursive, so is T∪{S}. So if you could decide whether these prove the same theorems, you could again...
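The first equivalence, S is a theorem of T iff T∪{¬S} is unsatisfiable, can be run mechanically in the decidable propositional case. A brute-force sketch (the CNF encoding and function names are just illustrative; in the full first-order case no such decision procedure exists, which is the point of the argument above):

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT: each clause is a list of ints, +i / -i for variable i."""
    for v in product([False, True], repeat=n_vars):
        if all(any(v[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def is_theorem(axioms, s, n_vars):
    """S is a theorem of T iff T ∪ {¬S} is unsatisfiable.
    Here S is a single positive literal, so ¬S is the clause [-s]."""
    return not satisfiable(axioms + [[-s]], n_vars)

# T = {x1, x1 → x2}, in CNF: [[1], [-1, 2]]. Then x2 is a theorem of T:
print(is_theorem([[1], [-1, 2]], 2, 2))  # True
print(is_theorem([[1]], 2, 2))           # False
```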

Perhaps what's confusing here is one of these each-all things that permeates this whole area. If T and T* have the same theorems, then EACH axiom of T will be provable in T*, and EACH axiom of T* will be provable in T, and of course it follows that there is, as a matter of fact, an algorithmic way of generating precisely those proofs. There is no general reason, however, to suppose that we will be able to prove that EVERY axiom of T is provable in T* nor that EVERY axiom of T* is provable in T. This is the same sort of contrast as between: PA proves EACH statement of the form: n is not a proof of 0=1; but PA does NOT prove: For all n, n is not a proof of 0=1.
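The contrast at the end can be displayed schematically (a sketch, with Prf_PA a standard arithmetized proof predicate):

```latex
\text{for each } n:\;\; \mathrm{PA} \vdash \neg\mathrm{Prf}_{\mathrm{PA}}\bigl(\overline{n},\, \ulcorner 0{=}1 \urcorner\bigr)
\qquad \text{yet} \qquad
\mathrm{PA} \nvdash \forall x\,\neg\mathrm{Prf}_{\mathrm{PA}}\bigl(x,\, \ulcorner 0{=}1 \urcorner\bigr)
```

The unprovable universal sentence is just Con(PA), so the failure of the "all" half is exactly the second incompleteness theorem (assuming PA is consistent).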

David Hume famously pointed out that there seems to be a logical gap that prevents us from concluding "ought" from "is". It seems to me that the truth of this general observation is still under discussion. Does deontic logic shed any light on this question, as one would expect it to, or does the problem morph into the question of which form deontic logic should take?

The question whether "is" implies "ought", in the most obvious form, is just the question whether: p → Op, where "Op" means: p ought to be the case. We can consider deontic logics with and without that axiom, if we wish, and I suppose we might learn something from deontic logic about its consequences. But the formal study of deontic logic itself isn't likely to tell us whether we should accept that axiom, any more than the formal study of modal logic will tell us what principles concerning metaphysical necessity we should accept.
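In standard Kripke-style semantics for deontic logic, Op holds at a world w just in case p holds at every world deontically accessible from w. A toy model (all names here are hypothetical) shows the axiom p → Op failing:

```python
# Worlds and a deontic accessibility relation: R[w] lists the worlds that
# are "ideal" from w's standpoint. O(p) holds at w iff p holds throughout R[w].
worlds = {"w1", "w2"}
R = {"w1": {"w2"}, "w2": {"w2"}}  # w2 is the ideal world
val = {"p": {"w1"}}               # p is true at w1 only

def holds_p(w):
    return w in val["p"]

def holds_Op(w):
    return all(holds_p(v) for v in R[w])

# At w1, p is true but Op is false: "is" does not imply "ought" in this model.
print(holds_p("w1"), holds_Op("w1"))  # True False
```

The axiom would hold generally only if every world counted among its own deontic alternatives, i.e., if whatever is the case also ought to be. The semantics leaves that open, which matches the point that deontic logic by itself won't settle whether to accept the axiom.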

Did Bertrand Russell or any of the logicists ever reply to or address Gödel's incompleteness theorem directly?

I do not think Russell ever addressed it directly, and Frege died before Gödel did his work. It is possible that some of the positivistic logicists, like Hempel, did, but not so far as I know. That said, incompleteness has been raised as a problem for contemporary forms of logicism, generally known as "neo-logicism". (For an introduction, see this paper of mine or this paper by Fraser MacBride.) I think the response to this objection is pretty straightforward, however. Take neo-logicism to be the view that the truths of arithmetic (we'll focus on arithmetic) are all logical consequences of some core set of principles that, though not truths of logic in any sense now available to us, have some claim to be regarded as "conceptual" truths, or "analytic" truths, or something in this general vicinity. The incompleteness theorem tells us that there can be no algorithmically computable ("recursive") set of principles from which all truths of arithmetic follow, but only if we assume that...

Is it possible that there exist types or methods of argument/reasoning that have not been discovered or employed before? (I do not mean specific arguments for specific problems, but Forms of arguments, so to speak.)

Sure, why not? Historically, there certainly have been. An example would be mathematical induction, which was known, in some form, to Euclid, but, so far as I know, is not present in earlier Greek mathematics. That seems to me to count if anything does. Now that was a long time ago, but why shouldn't there be such argument-forms that are unknown now even to us? Discovering one would be a very good thing, then!

How does Godel's incompleteness theorem affect the way that mathematicians understand and see mathematics as well as the world (if at all)? I'm not even close to a mathematician, but even a slight dose of the idea and theorem were enough to affect me so I suppose that I'm curious.

This depends in part upon what you mean by "mathematicians". Ordinary mathematicians, by which I mean mathematicians who aren't particularly or specially interested in logic, have generally, as a group, been utterly uninterested in Gödel's theorems. Reactions vary from case to case, and some are based on ignorance. But we do know, generally, that a huge proportion of "ordinary mathematics" can be done in what are, by the standards of set theorists, very weak theories. So the incompleteness of these theories tends not to be an issue. We've hardly exploited the strength they have. Another way to put this point is that, by and large, we know of very few "interesting" mathematical claims (claims that would be interesting to an "ordinary mathematician") that can be shown to be independent of these same, quite weak theories, let alone independent of Zermelo-Fraenkel set theory plus the axiom of choice, which is what most logicians would regard as sufficient to formalize the principles used in ordinary...

I want to start to study logic on my own, to the level that I can understand a book like Enderton's, and Michael Potter's Set Theory and Its Philosophy, and also be able to study mathematical logic, like recursive functions and model theory, and understand the logicist reduction and why it failed because of Gödel. Suppose that I've read nothing on logic and I'm not strong in math, but I want to be good at it. I'll be gleeful if you introduce me to a series of books on: a) introductory books on logic, followed by more professional ones; b) the mathematics of logic; c) books on the logicist reduction and Gödel, or what is necessary to understand them. I have plenty of time to study all these and will face all the troubles. Thank You Very Much, Pouria, from the Turkish part of Iran (I think you guessed why I have to study on my own)

OK, so we're going to start with an introductory logic text. There are lots and lots of these, and people have very different views about what is best. But you could definitely do worse than to get a copy of our own Peter Smith's book. That has the advantage that you might then also read his book on intermediate logic, which covers Gödel's theorems and flows naturally from the first. This book may be especially good for you if, as I understood you to be saying, you aren't a math genius. While still remaining fairly rigorous, Peter keeps the technicalities to a minimum, so the book is accessible to more people than most such books. If you want something a little more technical, as well as wider ranging, then you might try Boolos, Burgess, and Jeffrey, Computability and Logic, which really gives a very solid introduction to basic recursion theory. Enderton's A Mathematical Introduction to Logic covers the same sort of ground, but in a much more "mathematical" way. It is often used...

I am reading a book that explains Gödel's proofs of incompleteness, and I found something that disturbs me. There is a hidden premise that says something like "all statements of number theory can be expressed as Gödel numbers". How exactly do we know that? Can that be proved? The book did give a few examples of translations of this kind (for example, how to turn the statement "there is no largest prime number" into a statement of a formal system that resembles PM, and then how to turn that into a Gödel number). So the question is: how do we know that every normal-language number-theory statement has its equivalent in a formal system such as PM? (It does seem intuitive, but what if there's a hole somewhere?)

Peter's explanation is as good as it could be, but let me elaborate on something. You will note that he keeps referring to the "standard proof" of the first incompleteness theorem. There are other proofs, and some of them do not rely upon a coding of this kind. Here's one nice way that stays very close to the "standard" proof. (There are also other, totally different sorts of proofs.) First, prove incompleteness for a simple theory of syntax, that is, a theory that is actually about symbols: sentences and the like. We can do this by looking at a theory that talks about its own syntax. (Quine sketches such a proof at the end of his book Mathematical Logic.) This part of the proof will look almost exactly like the standard proof, but minus the coding. The reason no coding is needed is that the theory really is one about symbols, and we can define things like sequences and take proofs to be sequences of formulas, etc. Second, establish the classical result that any theory that ...
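As for the coding in the standard proof, the usual device assigns each symbol a number and encodes a string of symbols via prime powers. A minimal sketch (the symbol table here is an arbitrary, hypothetical assignment; any injective one works):

```python
def primes(n):
    """First n primes, by trial division."""
    ps = []
    cand = 2
    while len(ps) < n:
        if all(cand % p for p in ps):
            ps.append(cand)
        cand += 1
    return ps

# Hypothetical symbol codes; only injectivity matters.
CODE = {"0": 1, "=": 2, "S": 3, "(": 4, ")": 5, "+": 6}

def godel_number(formula):
    """Encode symbol i of the string as the i-th prime raised to its code."""
    n = 1
    for p, sym in zip(primes(len(formula)), formula):
        n *= p ** CODE[sym]
    return n

print(godel_number("0=0"))  # 2**1 * 3**2 * 5**1 = 90
```

Unique prime factorization guarantees the encoding can be reversed, so a formula is recoverable from its number; that reversibility is what the coding machinery in the standard proof relies on.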

Why is studying model theory important to philosophy? How much math and logic does one need to know to get down to studying model theory? And does it have something to say about the hierarchical nature of scientific language?

I'm not sure studying model theory is important to philosophy, broadly speaking. For some areas, such as the philosophy of set theory, it surely is important that one know at least something about the basic results and techniques, since they are used regularly in set theory itself. But I doubt one really needs to know very much model theory to work in philosophy of mathematics generally. (I don't, and I do.) A general understanding of what models are and how they are used in studying logical consequence is probably adequate, which means, roughly: what you'd get in any good sequence of logic courses. Of course, if you want to work in logic itself, then how much model theory you need to know will depend upon which branch of logic interests you.