Does infinity exist?

Well, mathematicians talk about infinitary structures all the time. To start with the very simplest examples, they talk about the set of all natural numbers, {0, 1, 2, 3, ...}; and they also talk about the set of all subsets of the natural numbers. And they introduce "infinite cardinal numbers" which indicate the size of such infinite sets. There's a beautiful theorem by Cantor which shows that, on a very natural understanding of "number of members", the set of all natural numbers has a smaller number of members than the set of all its subsets. So there are different infinite cardinal numbers, which we can order by size. Indeed, again on natural assumptions, there's an infinity of them. Is that little reminder about what mathematicians get up to enough to settle the question whether "infinity exists"? Well, perhaps not. There remain a number of questions here. Here's one (so to speak) about the pure mathematics, and one about applied mathematics. First, then, we might say "Sure,...
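The engine behind Cantor's theorem is a diagonal construction, and a finite analogue of it can be sketched in a few lines. Here is a minimal sketch (Python, purely illustrative; the small set and the particular map below are arbitrary choices): for any map f from a set S to subsets of S, the set D of those x not belonging to f(x) is missed by f, so no map from S onto all of its subsets can exist.

```python
# Illustrative sketch of the diagonal construction behind Cantor's theorem:
# for ANY map f from a set S to subsets of S, the "diagonal" set
#   D = {x in S : x not in f(x)}
# differs from f(x) at x itself, so f cannot be onto the power set of S.
S = set(range(5))                                       # a small stand-in set
f = {0: {1, 2}, 1: set(), 2: {0, 2, 4}, 3: S, 4: {4}}   # an arbitrary map S -> subsets of S

D = {x for x in S if x not in f[x]}   # the diagonal set; here D = {0, 1}
print(D)
print(any(f[x] == D for x in S))      # False: D is missed by f, as it must be
```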

Hi, I've been reading about transfinite cardinal numbers and was wondering if you could answer this question. Supposedly the set of integers has the same cardinality as the set of even integers (both are countably infinite) since there exists a bijection between the two sets. But at the same time, doesn't there also exist a function between the set of even integers and the set of integers that is injective while NOT bijective (g(x) = x), since the image of g does not comprise all of the integers (only the even ones)? To clarify, let f and g be functions from the set of EVEN integers to the set of ALL integers. Let f(x) = x/2, and g(x) = x. Both are injective functions, but f is onto while g is not. So f is a bijection, while g is merely 1-1. Why, then, can I not say that the set of even integers and the set of all integers do NOT have the same cardinality, since there exists some 1-1 function that is not onto? It seems like I should be able to draw this intuitive conclusion since g is 1-1, so for...

The background issue here is: what's the best way of extending our talk of one set's being "larger" than another from the familiar case of finite sets to the infinite case? Now, in the finite case, we can say that [L] the set A is larger than the set B if and only if there is a 1-1 mapping between B and some proper subset of A (i.e. there is an injection from B into A which isn't onto). But equally, in the finite case, we can say that [C] the set A is larger than the set B if and only if there is a 1-1 mapping between B and some proper subset of A, but no 1-1 mapping between B and the whole of A. In finite cases, [L] and [C] come to just the same: but in infinite cases they peel apart. [L] says the set of naturals is larger than the set of evens; [C] denies that. So isn't [L] the more intuitive principle to adopt? No! For take the case where A and B are both the very same set, the natural numbers. There is a 1-1 mapping from the...
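For concreteness, here is a minimal sketch (Python, purely illustrative, using a finite window of even integers since the infinite sets themselves can't be listed) of the two maps in the question, and of why [L] misfires:

```python
# The questioner's two maps from the even integers to the integers, sampled on
# a finite window. Purely illustrative.
evens = list(range(-10, 12, 2))      # -10, -8, ..., 10 (a finite sample)

f = {x: x // 2 for x in evens}       # f(x) = x/2 : a bijection onto -5, ..., 5
g = {x: x for x in evens}            # g(x) = x   : injective, but misses every odd integer

print(sorted(set(f.values())))       # every integer from -5 to 5
print(sorted(set(g.values())))       # only the even ones

# Both maps are injective; only f is onto (within the matching window). On the
# Cantorian criterion [C], it is the existence of SOME bijection (here f) that
# settles sameness of size; a non-surjective injection like g settles nothing.
# After all, n |-> 2n maps the naturals 1-1 onto a proper subset of themselves
# (the evens), so on [L] the naturals would come out "larger" than themselves.
```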

Do numbers exist?

Here's a simple argument. (1) There are four prime numbers between 10 and 20. (2) But if there are four prime numbers between 10 and 20, then there certainly are prime numbers. (3) And if there are prime numbers, then there must indeed be numbers. Hence, from (1) to (3) we can conclude that (4) there are numbers. Hence (5) numbers exist. Where could that simple argument be challenged? (2) and (3) look compelling, and the inference from (1), (2) and (3) to (4) is evidently valid. So that leaves two possibilities. We can challenge the argument at the very end, and try to resist the move from (4) to (5), saying that while it is true that there are such things as numbers, it doesn't follow that numbers actually exist. This response, however, supposes that there is a distinction between there being Xs and Xs existing. But what distinction could this be? Well, someone could mis-use "exists" to mean something like e.g. "physically exists": and of course it doesn't follow from...
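Premise (1) is of course just elementary arithmetic; for what it's worth, a two-line check (Python, purely illustrative) lists the four primes in question:

```python
# Premise (1): the primes strictly between 10 and 20.
primes = [n for n in range(11, 20) if all(n % d for d in range(2, n))]
print(primes)   # [11, 13, 17, 19] -- four of them
```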

I have a very vague understanding of Goedel's famous Incompleteness theorem, but I know enough to know that I see it constantly interpreted in what seem like bizarre ways that I am sure anyone who really knew the relevant math or logic or philosophy would find ridiculous. The most common of these come from "new age" sources. My question is, for someone who knows something about the theorems, what is it about them that you think attracts these sorts of odd and (to say the least) highly suspect interpretations? I mean you don't see a lot of bizarre interpretations of most technical theories/proofs in math, logic, or philosophy.

You are quite right that Gödel's (first) incompleteness theorem attracts all kinds of bizarre "interpretations". Various examples are discussed and dissected in Torkel Franzen's very nice short book, Gödel's Theorem: An Incomplete Guide to its Use and Abuse , which I warmly recommend. My guess is that a main source for the whacky interpretations is the claim that has repeatedly been made that the theorem shows that we can't be "machines", and so -- supposedly -- we must be something more than complex biological mechanisms. You can see why that conclusion might in some quarters be found welcome (and other technical results in logic generally don't seem to have such an implication). But as Franzen explains very clearly, it doesn't follow from the theorem.

This question is directed (mainly) to Peter Smith. I've read your "Introduction to Gödel's Theorems" (that's how I ended up here) and found it fascinating. At a certain point in the book, it is asserted that G (that is, a Gödel sentence) is of Goldbach type. My question is the following: what are the odds (I don't mean statistically, just your opinion) that the Goldbach conjecture is in some manner an example of a Gödel sentence naturally (?) arising? I am aware that most mathematicians believe the Goldbach Conjecture to be true, even if all attempts to prove it have failed so far. So, could it be that it actually is true, but to be proven, additional axioms would have to be added to regular arithmetic, or the former would have to be modified in some fashion? Has anyone tried to prove this? Have they succeeded? Sorry for the messy English, I hope my question can be understood, and thanks for writing such an interesting book.

I'm really glad you enjoyed the Gödel book! Suppose that S is Goldbach's conjecture. And suppose theory T is your favourite arithmetic (which includes Robinson Arithmetic). Then Theorem 9.3 applies to S. So if not-S is not logically deducible from T, then S must be true. So if we had a proof that S is a "naturally" arising Gödel sentence -- i.e. a demonstration that T proves neither S nor not-S -- we'd ipso facto have a proof that S is true. That means that establishing that S is a "naturally" arising Gödel sentence for T -- if that's what it is -- is at least as hard as proving Goldbach's Conjecture itself. Which, the evidence suggests, is very hard! As to the "odds": my hunch is that GC is true, and can be proved in PA -- but I wouldn't bet even a decent meal out on it!!
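Why does the theorem bite here? Because Goldbach's conjecture has the right logical shape: a counterexample would be a single even number failing a condition that can be checked by a finite computation, and any theory containing Robinson Arithmetic can verify such a finite computation and so prove not-S. Here is a minimal sketch of that finite check -- Python, purely illustrative, with a deliberately naive primality test:

```python
# Why a counterexample to Goldbach's conjecture would be finitely checkable
# (and hence refutable in any theory containing Robinson Arithmetic): for a
# given even n, deciding whether it is a sum of two primes takes only
# finitely many primality tests.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def is_goldbach_sum(n):
    """Return True iff the even number n is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# "n is a counterexample" is thus a decidable property of n: if the conjecture
# were false, the witnessing computation could be turned into a formal proof
# of its negation. Contrapositively, if not-S is unprovable, S is true.
print(all(is_goldbach_sum(n) for n in range(4, 10_000, 2)))   # True
```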

I know that there are some serious problems concerning the idea that mathematics is grounded on logic. But computers can perform mathematical operations, and computers use logic, so I think that at least for practical purposes we can use logic to support mathematics. Am I right? My second question is this: can we infer that 2+2=4 from the principle of non-contradiction? Thank you!

You need to distinguish the claim that mathematics is grounded on logic from the claim that mathematics uses logic. The weaker second claim is evidently true, at least in this sense: mathematical reasoning is a paradigm of good deductive reasoning, and standard systems of logic explicitly aim to codify, more or less directly, the kinds of good deductive reasonings that mathematicians use. (And computers might be used to echo some such reasonings too.) But the fact that mathematics uses logical reasoning doesn't show that mathematics is grounded on logic, if that is the much stronger thesis that at least arithmetic, maybe the whole of classical analysis, just follows from pure logic plus definitions of mathematical notions in logical terms. (I take it that it is this logicist thesis which you are thinking of when you say that there are "serious problems" about the idea that mathematics is grounded on logic.) For example, you might think that in set theory we use logic to...
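On the second question: here, as a sketch rather than a full formal derivation, is what a standard proof of 2+2=4 actually rests on. Writing the numerals as 2 = S(S(0)) and 4 = S(S(S(S(0)))), and using the two recursion equations for addition (x + 0 = x and x + S(y) = S(x + y)), we can calculate:

```latex
\begin{align*}
2 + 2 &= 2 + S(S(0)) \\
      &= S(2 + S(0))       && \text{by } x + S(y) = S(x + y)\\
      &= S(S(2 + 0))       && \text{by } x + S(y) = S(x + y)\\
      &= S(S(2)) = S(3) = 4 && \text{by } x + 0 = x.
\end{align*}
```

The work here is done by the arithmetical axioms governing successor and addition; the principle of non-contradiction on its own supplies no such equations, so -- at least on the usual story -- it alone won't get you 2+2=4.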

Why does mathematics "work"? How does it manage to describe the physical world?

Which mathematics manages to describe the physical world? Mathematicians offer us, e.g., Euclidean and non-Euclidean geometries of spaces of various dimensions (and the non-Euclidean geometries come in different brands). They can't all correctly describe the world, since they say different things even about such simple matters as the sum of the angles of a triangle. But we hope that one such geometry does indeed describe the sort of structure exemplified by physical space (or better, physical space-time). That's pretty typical. Mathematicians explore all kinds of different possible structures. Only some of them are physically exemplified. For example, group theory explores patterns of symmetry; some of the patterns are to be found in the world -- but I guess that no one thinks e.g. that the Monster Group is physically instantiated. Mathematical physicists tell us which kinds of structures are to be found in the physical world and then use the appropriate mathematics to...
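To make the disagreement about angle sums vivid, here is a small sketch (Python with numpy, purely illustrative): the spherical triangle with vertices at the three coordinate points of the unit sphere has three right angles, so its angles sum to 270 degrees, whereas any Euclidean plane triangle's angles sum to 180.

```python
import numpy as np

def spherical_angle(A, B, C):
    """Angle at vertex A of the spherical triangle ABC (unit vectors)."""
    tAB = B - np.dot(A, B) * A   # tangent direction at A toward B
    tAC = C - np.dot(A, C) * A   # tangent direction at A toward C
    cosang = np.dot(tAB, tAC) / (np.linalg.norm(tAB) * np.linalg.norm(tAC))
    return np.degrees(np.arccos(np.clip(cosang, -1, 1)))

A, B, C = np.eye(3)   # the triangle bounding one octant of the unit sphere
total = sum(spherical_angle(*perm) for perm in [(A, B, C), (B, C, A), (C, A, B)])
print(total)          # ~270 degrees, not the Euclidean 180
```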

A few things here. First, would someone like Kurt Gödel be considered a philosopher of math, a logician, or a mathematician? Maybe all three (or something else not listed)? And what are the differences between the three? Thanks.

A philosopher of mathematics is interested in questions like: what are numbers? what kind of necessity do arithmetical truths have? how do we know the basic laws of arithmetic are true? what about sets -- do they really exist over and above their members? is there a universe of sets? there are various set theories; how can we decide which is true of the universe of sets? And so on. You don't have to be a working mathematician to think about these matters (though obviously you have to know a little about the relevant mathematics you want to philosophize about). Nor do you have to be a logic-expert. Most mathematicians aren't interested in the philosophy of mathematics (just as most scientists aren't interested in the philosophy of science, and most lawyers aren't interested in the philosophy of law). They just go about doing their maths. And among mathematicians, a serious interest in logic is a rarity: you can certainly be a mathematician without being a logician (e.g. by being a fluid-dynamicist or an...

Since one's own reasoning is basically a set of rules of inference operating on a set of axiomatic beliefs, can one reliably prove one's own reasoning to be logically consistent from within one's own reasoning? Might not such reasoning itself be inconsistent? If our own reasoning were inconsistent, might not the logical consistency (validity) of such "proofs" as those of Gödel's Incompleteness Theorems be merely a mirage? How could we ever hope to prove otherwise? How could we ever trust our own "perception" of "implication" or even of "self-contradiction"?

This question raises a number of issues it is worth disentangling. It is far from clear that we should think of our reasoning as "operating on a set of axiomatic beliefs". That makes it sound as if there's a foundational level of beliefs (which are kept fixed as "axioms"), and then our other beliefs are all inferentially derived from those axioms. There are all sorts of problems with this kind of crude foundationalist epistemology, and it is pretty much agreed on all sides that -- to say the least -- we can't just take it for granted. Maybe, to use an image of Quine's, we should instead think of our web of belief not like a pyramid built up from foundations, but like an inter-connected net, with beliefs linked to each other and our more "observational" beliefs towards the edges of the net linked to perceptual experiences, but with no beliefs held immune from revision as "axioms". Sometimes we revise our more theoretical beliefs in the light of newly acquired, more observationally-based, beliefs...

I'm a mathematician looking at some of the work of Leonhard Euler on the "pentagonal number theorem". My question is about how we can know some statement is true. Euler had found this theorem in the early 1740s, and said things like "I believed I have concluded it by a legitimate induction, but at the same time I haven't been able to find a demonstration" (my translation), and that it is "true even without being demonstrated" (vraies sans être démontrées). This got me thinking that "knowing" something is not really a mathematical question. A proof lets us know a statement is true because we can work through the proof. But a mathematical statement is true whether we know it or not, and if you tell me you know that a statement is true, and then in fact someone later proves it, I can't show mathematically that you didn't know it all along. This isn't something I have thought about much before, and my question is: are there any papers or books that give some ideas about this that would be approachable by...
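The sort of "legitimate induction" Euler describes is easy to replay. Here is a minimal sketch (Python, purely for illustration) that checks the pentagonal number theorem -- the product (1-x)(1-x^2)(1-x^3)... equals the sum over all integers k of (-1)^k x^(k(3k-1)/2) -- coefficient by coefficient up to degree 100: exactly the kind of evidence Euler had, short of a demonstration.

```python
# Empirical check of Euler's pentagonal number theorem up to degree 100:
# compare the coefficients of prod_{n>=1} (1 - x^n) with the sparse series
# built from the generalized pentagonal numbers k(3k-1)/2 and k(3k+1)/2.
DEGREE = 100

# Expand (1 - x)(1 - x^2)...(1 - x^DEGREE), truncated at x^DEGREE.
coeffs = [0] * (DEGREE + 1)
coeffs[0] = 1
for n in range(1, DEGREE + 1):
    for d in range(DEGREE, n - 1, -1):   # in-place multiply by (1 - x^n)
        coeffs[d] -= coeffs[d - n]

# Build the pentagonal-number series 1 + sum_k (-1)^k (x^{k(3k-1)/2} + x^{k(3k+1)/2}).
expected = [0] * (DEGREE + 1)
expected[0] = 1
k = 1
while True:
    added = False
    for g in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
        if g <= DEGREE:
            expected[g] += (-1) ** k
            added = True
    if not added:
        break
    k += 1

print(coeffs == expected)   # True: the identity checks out up to x^100
```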

Perhaps there are two different questions here. There's a very general question about truth and proof; and there's a much more specific question about the sort of case exemplified by Euler, where a mathematician claims to know (or at least have good grounds for) a proposition even in the absence of a demonstrative proof. Let's take the specific question first, using a different and perhaps more familiar example. We don't know how to prove Goldbach's conjecture that every even number greater than two is the sum of two primes. Yet most mathematicians are pretty confident in its truth. Why? Well, it has been computer-verified for numbers up to the order of 10^16. But so what? After all, there are other well-known cases where a property holds of numbers up to some much greater bound but then fails. [For example, the logarithmic integral function li(n) over-estimates the number of primes below n but eventually under-estimates, then over-estimates again, flipping back and forth, with...
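To give a feel for the li(n) example, here is a small sketch (Python, assuming the sympy and mpmath libraries are available, purely illustrative) comparing li(n) with the prime-counting function for a few modest n. In every feasibly checkable range li(n) over-counts; yet Littlewood proved the difference changes sign infinitely often, with the first crossing known to occur only at some astronomically large n.

```python
# Compare li(n) with pi(n), the number of primes <= n, for a few values of n.
# Assumes sympy (for primepi) and mpmath (for the logarithmic integral li).
from sympy import primepi
from mpmath import li

for n in [10**3, 10**4, 10**5, 10**6]:
    print(n, int(primepi(n)), float(li(n)))
# For every n we can feasibly check, li(n) > pi(n); Littlewood nevertheless
# showed that the difference changes sign infinitely often.
```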
