Could there be more than a countably infinite number of propositions?

If I remember correctly, and I may well not, David Lewis explicitly argues that there are uncountably many propositions in On the Plurality of Worlds and uses this as an argument against any view that would try to reduce propositions to sentences. At the very least, he does consider this issue. So here's an argument, which I think I remember from that book, that we can consider, anyway. It is based upon the claim that, for any real number x, there ought to be a proposition---a possible content of thought---that I am shorter than x inches tall. Indeed, each such proposition could be expressed by a sentence. All we have to do is give the real number x a name, say, "Fred", and then the proposition will be expressed by the sentence "I am shorter than Fred inches tall". But if so, then there are at least as many propositions as there are reals. The key to this argument, note, is the observation that the claim "For every proposition p, there could be a sentence S that expressed it" is...
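To make the cardinality step explicit, here is one way of setting it out in symbols; the reconstruction (and the injectivity reasoning) is mine, not a quotation from Lewis:

```latex
% For each real x, let P(x) be the proposition that I am shorter than
% x inches tall. The map x -> P(x) is injective: if x < y, there is a
% possible world where my height lies strictly between x and y, at which
% P(y) is true but P(x) is false, so P(x) and P(y) differ.
\[
x \mapsto P(x) \text{ injective}
\;\Longrightarrow\;
|\mathrm{Propositions}| \ge |\mathbb{R}| = 2^{\aleph_0} > \aleph_0 .
\]
% By contrast, the sentences of any language with a countable vocabulary
% form only a countable set, since each sentence is a finite string.
```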

As commonly understood and reinforced here, 2 + 2 = 4 is taken as meeting the test for absolute certainty. This appears to be true in a formal or symbolic sense, but is it true in reality? When we count two things as being the same and add them to two other such things, do we really get four identical things? Perhaps, perhaps not; it may depend on one's identity theory. Do we know with absolute certainty when we have one thing and not two? What am I missing?

I don't myself have a view on whether 2+2=4 is absolutely certain. I suppose it's as certain as anything is or could be. But the question here is different. It's whether that certainty is undermined by doubts about what happens empirically. As Gottlob Frege would quickly have pointed out, however, the mathematical truth that 2+2=4 has nothing particular to do with what happens empirically. It might have been, for example, that whenever you tried to put two things together with two other things, one of them disappeared. (Or perhaps they were like rabbits, and another one appeared!) But mathematics says nothing of this. That 2+2=4 does not tell you what will happen when you put things together. It only tells you that, if there are two of these things and two of those things, and if none of these is one of those, then there are four things that are among these and those. It's hard to see how one's theory of identity could affect that.
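Frege's point can be put in symbols using numerically definite quantifiers; the formalization below is the standard textbook one, not a quotation from Frege:

```latex
% "There are exactly two Fs", unpacked into plain first-order logic:
\[
\exists_2 x\,Fx \;\equiv\;
\exists x \exists y\,\bigl(Fx \wedge Fy \wedge x \neq y \wedge
\forall z\,(Fz \rightarrow z = x \vee z = y)\bigr)
\]
% With such quantifiers defined, the content of "2+2=4" described in the
% text is the following logical truth:
\[
\bigl(\exists_2 x\,Fx \wedge \exists_2 x\,Gx \wedge
\neg\exists x\,(Fx \wedge Gx)\bigr)
\rightarrow \exists_4 x\,(Fx \vee Gx)
\]
% This is valid however objects behave when physically combined: it says
% nothing about things disappearing, or multiplying like rabbits.
```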

When young children perform long division or multiplication, are they constructing a proof?

So I asked my brother about this, and he tells me that this kind of question is much discussed in the literature on mathematics education. Here's what he had to say:

"Good thing to think about. A related idea I've been considering for some time---and maybe the difference is just a matter of cognitive development---is whether solving an equation algebraically is a proof.

"Another spin on the idea---which is what got me thinking about it in the first place---is whether solving equations ought to be taught as proof, since every step one takes in an algebraic solution can be mathematically and logically justified through some equivalence that leads to the solution set. What most kids end up learning to do is to carry out the procedures of solving without any real understanding of (or caring about) why what they are doing is mathematically justified. I have a sense that if solving were taught as proof, then it would be more natural for kids to pay attention to why certain steps that seem OK actually introduce...
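To make that concrete, here is a worked example of a solution written as a proof; the example and its annotations are mine, not my brother's:

```latex
% Solving 2x + 3 = 7, with each step recorded as an equivalence:
\begin{align*}
2x + 3 = 7 &\iff 2x = 4 && \text{subtracting 3 from both sides}\\
           &\iff x = 2  && \text{dividing both sides by 2 (legitimate, since } 2 \neq 0\text{)}
\end{align*}
% Because each step is an equivalence rather than a mere implication, the
% chain proves that the solution set is exactly \{2\}: that 2 is a
% solution, and that nothing else is.
```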

Sure, why not? What they construct---I take it we mean, "write down"---is the very same proof you or I might construct, so it's a proof. But there is a different question you might ask, namely, whether they understand it as a proof. And that, I take it, is an empirical question. Let me ask my brother, who works on mathematics education, and see what he has to say....

What are numbers? Are they unquestionably EVERYTHING? Let's take 17 and 18, for example: Aren't there infinitely many numbers between 17 and 18? There is no such thing as the smallest number, and there is no such thing as the largest number. WHY?!

Well, there are a lot of questions there. I won't try to answer the first one: That's a topic for a book, not an internet posting. And I'm not sure I understand the second one. But regarding the next two, yes, of course there are infinitely many numbers between 17 and 18. Here are some of them: 17.1; 17.11; 17.111; 17.1111; etc. And between any two of those, there are infinitely many more. But if you want something that will really make your head spin: Consider all the rational numbers, that is, all the numbers that can be written as fractions m/n. There are no more of those numbers than there are "natural numbers", like 0, 1, 2, etc., and that is true even though there are infinitely many rational numbers in between any two natural numbers. The proof of this is not terribly difficult. The point is that we can order all the rational numbers like this: 1/1, 1/2, 2/1, 1/3, 2/2, 3/1, 1/4, 2/3, 3/2, 4/1, .... (What's the pattern?) Some rational numbers occur more than once, of course, but they can be...
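The pattern, to give it away: list the fractions m/n in order of the sum m+n. Here is a small Python sketch of the enumeration (my own illustration; it skips the duplicates by emitting only fractions already in lowest terms):

```python
from fractions import Fraction
from math import gcd

def positive_rationals():
    """Yield every positive rational exactly once.

    Walk the diagonals m + n = s for s = 2, 3, 4, ...; on each diagonal,
    emit m/n only when gcd(m, n) == 1, so duplicates such as 2/2 = 1/1
    and 2/4 = 1/2 are skipped. Pairing the outputs with 0, 1, 2, ...
    is exactly the 1-1 correspondence that shows countability.
    """
    s = 2  # smallest possible value of m + n
    while True:
        for m in range(1, s):
            n = s - m
            if gcd(m, n) == 1:  # already in lowest terms
                yield Fraction(m, n)
        s += 1

gen = positive_rationals()
print([str(next(gen)) for _ in range(10)])
# -> ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5']
```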

We generally hold that a mathematical proposition such as "2 + 2 = 4" is necessarily true; it is difficult to imagine a possible world in which it is false. However, is it possible that "2 + 2 = 4" is not a statement that expresses a mathematical necessity (or an operation involving numeric values that must provide a certain result), but rather presents an inductive inference based on how people currently "define" the number "2", and the operator "+"? We could, for example, someday come to discover that "2" does not represent "2 things or ideas"; what we call 2 things may turn out to be 3 things, or 1 thing, etc. If this is possible then it would seem that "2 + 2 = 4" is an empirical, not a rational truth. Is this intelligible? I realize that this last statement, that we could discover 2 to refer to 3 things, etc., entails a theory of what a number is, i.e. a number "represents a quantity or amount of something". It seems, though, that in order to conclude that "2 + 2 = 4" is a necessary truth we must...

Perhaps the first thing to say here is that we need to distinguish the question whether it is necessary that 2+2=4 from the question whether the sentence "2+2=4" is necessarily true. It seems to me that no sentence is necessarily true. Any sentence might have been false, simply because that sentence might have meant something other than what it in fact means. For example, "2+2=4" might have meant that 3+3=4, and then it would have been false. And that is what it would have meant had "2" meant 3 rather than what it does mean, namely, 2. So I agree absolutely that whether "2+2=4" is true depends upon what "2" and "+" and "4" and "=" all mean, not to mention the grammatical rules that govern the significance of combining them in a certain way. And if you want to put that by saying that the truth of this sentence depends upon how we "define" the numeral "2", I won't object. Not too strongly, anyway. But it is an entirely different question whether it is necessary that 2+2=4. That is not at...

Are the infinitely small and the infinitely large the same thing?

It would help to be told why one might think they were. But in mathematics, no, they are not. Something that is infinitely small---a so-called infinitesimal---is something that is smaller than anything of finite size. Something that is infinitely large is something that is larger than anything of finite size. So if they were the same, something would have to be both larger and smaller than anything of finite size. And that ain't gonna happen.
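To put the two definitions in symbols (one standard way of cashing them out, as in nonstandard analysis):

```latex
% epsilon is infinitesimal: smaller in magnitude than every positive real.
\[
\varepsilon \text{ is infinitesimal} \;\iff\;
\forall r \in \mathbb{R},\ r > 0:\ |\varepsilon| < r
\]
% N is infinite: larger in magnitude than every real.
\[
N \text{ is infinite} \;\iff\; \forall r \in \mathbb{R}:\ |N| > r
\]
% Nothing can satisfy both: it would have to be both < 1 and > 1 in
% magnitude. (The two notions are intimately related, though: a nonzero
% epsilon is infinitesimal exactly when its reciprocal 1/epsilon is
% infinite---perhaps one source of the thought that they are "the same".)
```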

Does there exist a type of thing which could be called a mathematical fact? That is, are there true entities which would exist even if there were no minds to do the maths to discover and describe them? In other words, it is the understanding of all numerate human beings that the square root of 81 is 9. Would the square root of 81 still be 9 if there were no minds, human, numerate or otherwise?

There are lots of physicists who study the history of the universe: how the universe began, for example. When they do their calculations concerning, say, the evolution of the universe in the few seconds following the big bang, they do seem to assume that the square root of 81 was 9 even then, when there were no minds. And more generally, it's rather hard to see how the existence or non-existence of minds could affect what the square root of 81 is. Might 81 itself not have existed had there been no minds? How precisely did the existence of minds bring it into being? Was it just impossible before there were minds for there to be 81 stars in a certain region of space? I think not.

Consider a first-order axiomatization of ZFC. The quantifiers range over all the sets. However, we can prove (in ZFC) that there is no set which contains all sets. Soooo... how can we make a _model_ for ZFC? The first thing you do when you make a model for a set of axioms is specify a domain, which is a set of things which the quantifiers range over... this seems to be exactly what you can't do with ZFC. So what am I missing?

This kind of concern has had a good deal of influence on research in logic over the last several decades. It was, for example, a major force behind Boolos's work on plural quantification. More recently, there has been an explosion of research on what is called "absolutely universal" quantification: quantification over absolutely everything, including all the sets there might be. As you note, there is no "model" of such discourse, in the usual sense; that is, the "intended interpretation" of such discourse cannot be a model, in the usual first-order sense. As Dan noted, one can talk about proper class models, but there is another line of inquiry, deriving from Boolos. One way to develop this approach is to take the domain of the interpretation to be a "plurality", so that the quantifiers range over the sets---not the set of all sets, or a class of all sets, but simply over the sets, whatever sets there may be. The details have to be worked out here, but it can be done, and in a reasonable way, too...
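For reference, here is the standard argument within ZFC, via the Axiom of Separation, that there is no set of all sets; this is textbook material supplied for illustration, not anything special to the plural approach:

```latex
% Suppose, for contradiction, that V is a set with every set as a member.
% Separation then guarantees that the following is also a set:
\[
R = \{\, x \in V : x \notin x \,\}
\]
% Since V contains every set, R \in V. But then
\[
R \in R \;\iff\; R \in V \wedge R \notin R \;\iff\; R \notin R,
\]
% a contradiction. So there is no such V---yet the quantifiers of ZFC are
% still supposed to range over all the sets, which is the puzzle above.
```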

Is mathematics somehow "scientific"? Let me explain. There is a sense in which scientific theories are ad hoc. We have a set of relevant observations, and we try to formulate a theory which (1) accounts for all of them and (2) is parsimonious. A theory here is just an explanatory principle tailored to capture the data we want. What we don't do is deduce scientific theories from foundational principles. Axioms in math often strike me as very much like this. The only difference is that the "data" or "observations" of interest here are our intuitions about mathematics (e.g., that A+B=B+A). When I look at the axioms of ZF set theory (for example), I don't see where they're supposed to be coming from; rather, they're just one ad hoc way of justifying propositions we feel must be justified. Isn't there something weird, though, about tweaking one's axioms to fit one's intuitions?

There are different ways of approaching axiomatization. One is more "top down". You have a pretty good idea what the truths are about a particular subject matter, and the problem is to find some reasonably manageable set of principles from which those truths all follow. Axiomatizations of logic itself might be so construed. It's arguable that the fundamental notion here is really validity, the semantic notion, defined in terms of interpretations and the like, and then the problem is to find a set of axioms from which all the valid formulae will be derivable. Whether the axioms have some intuitive basis may be neither here nor there. Of course, that needn't be the only way of looking at the matter, and it doesn't seem terribly plausible in the case of set theory, especially after one's naiveté has been shattered by the paradoxes. Here more of a "bottom up" perspective might seem appropriate: One might think that the axioms of set theory ought in fact to have some kind of intuitive basis. And, as it...

I've run into a problem in philosophy recently that I do not completely appreciate. Certain collections are said to be "too big" to be sets. In Lewis's Modal Realism, the set of all possible worlds is said to be one such collection. These are collections whose membership comprises infinitely many individuals of a robust cardinality. I (purportedly) understand that not all infinities are equal. But I don't quite see why there can be a set of continuum many objects, but not a set of certain larger infinities. Am I misunderstanding what it is to have "too big" a set?

As Alex says, in Lewis's case, he's really pointing towards an idea familiar from the philosophy of set theory. Not all "collections" of objects can form sets: The assumption that they do leads to contradiction. (Of course, we need some logical assumptions to get that contradiction, and these could be denied. And one might also not think contradictions are all that bad. But let's not go there now.) And given the standard axioms of set theory, we can prove that there is no set containing every object and, moreover, that there is no set that can be put in 1-1 correspondence with all the objects there are. But the standard concept of set, as embodied in Zermelo-Fraenkel set theory, is not in any way motivated by the idea that a set cannot be "too big". It is based upon a very different idea, called the "iterative" conception of set, though there is some question about whether the Axiom of Replacement (which was Fraenkel's distinctive contribution) is really motivated by the iterative conception or...
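Here, for reference, is the familiar diagonal argument behind the claim that no set is in 1-1 correspondence with everything; the sketch is mine, though the argument is just Cantor's:

```latex
% Cantor: no function f from a set A onto its power set P(A).
% Given any f : A -> P(A), diagonalize:
\[
D = \{\, a \in A : a \notin f(a) \,\}
\]
% If f were onto, then D = f(d) for some d in A, whence
\[
d \in D \;\iff\; d \notin f(d) \;\iff\; d \notin D,
\]
% a contradiction. So |P(A)| > |A| for every set A. Now, if V were a set
% containing every object, every subset of V would be an object and hence
% a member of V, so P(V) would embed into V, giving |P(V)| <= |V| and
% contradicting Cantor. That is one precise gloss on "too big".
```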
