Is it possible to prove within ZF(C) that the set of all proper infinite subsets of the set N has cardinality strictly greater than |N|?

I assume that N here denotes the set of natural numbers: N = {0, 1, 2, 3, ...}. Yes, it is possible to prove this, and the axiom of choice isn't needed, so you can prove it in ZF. Here's one way to do it: Let E = {2, 4, 6, ...} and O = {1, 3, 5, ...}. Thus, E and O are the sets of even and odd positive integers. (Notice that I have left 0 out of both.) Now consider the collection of all sets of the form E ∪ X, where X is a subset of O. Every set in this collection is infinite, since it includes every even positive integer. And every such set is a proper subset of N, since it doesn't include 0. There is exactly one such set for each subset X of O, so the cardinality of this collection of sets is the same as the cardinality of the set of all subsets of O (the power set of O). And since O can be put into one-to-one correspondence with N, the cardinality of the power set of O is the same as the cardinality of the power set of N, which, as Cantor showed, is strictly greater than |N|. Note: When I say that the cardinality of a...
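A sketch of the counting step in symbols (the names S and F are mine, not the author's):

```latex
% S is the collection described above; F matches its members with subsets of O.
\[
  S = \{\, E \cup X : X \subseteq O \,\}, \qquad
  F \colon \mathcal{P}(O) \to S, \qquad F(X) = E \cup X.
\]
% F is a bijection: since E and O are disjoint, X = F(X) \cap O, so distinct
% subsets of O give distinct members of S.  Hence
\[
  |S| = |\mathcal{P}(O)| = |\mathcal{P}(\mathbb{N})| > |\mathbb{N}|,
\]
% using the bijection n \mapsto 2n+1 between \mathbb{N} and O for the middle
% equality and Cantor's theorem for the final inequality.
```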

Is it true that anything can be concluded from a contradiction? Can you explain? It seems like a tautology if taken figuratively, because we can indeed conclude anything if we suspend the rules of reasoning, but there is nothing especially interesting in that fact, in my humble opinion.

Stephen Maitzen has given a syntactic response, showing how formal rules of logic can be used to derive any conclusion Q from a contradiction P & not-P. It might be worthwhile to point out that one can also give a semantic explanation of why Q follows from P & not-P--that is, an explanation based on the truth or falsity of the premise and conclusion, rather than on rules for manipulating logical symbols. The semantic definition of logical consequence is this: We say that a conclusion follows from a collection of premises if, in every situation in which the premises are all true, the conclusion is also true. To put it another way: the conclusion fails to follow from the premises if (and only if) there is some situation in which all the premises are true, but the conclusion is false. For example, the conclusion "It is snowing" does not follow from the premise "It is either raining or snowing," because there is a situation in which the premise is true but the conclusion is false-...
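A brute-force check of the semantic definition, as a small Python sketch (the function name is mine): among all truth assignments, none makes P & not-P true, so the requirement "every situation making the premise true also makes the conclusion true" is satisfied vacuously, whatever Q is.

```python
from itertools import product

def follows(premise, conclusion):
    """Semantic consequence: the conclusion holds in every assignment
    of truth values to P and Q that makes the premise true."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if premise(p, q))

# No assignment makes "P and not-P" true, so any conclusion Q follows:
print(follows(lambda p, q: p and not p, lambda p, q: q))  # True
# By contrast, "P or Q" does not semantically entail "Q":
print(follows(lambda p, q: p or q, lambda p, q: q))       # False
```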

Is it impossible that there be two recursive sets T and T* of axioms (in the same language) such that their closures under the same recursive set of recursive rules are identical, and yet there is no recursive proof of this fact? It seems impossible, but a simple proof of this fact would help elucidate matters!

I find the original question a little unclear, because I'm not sure what a "recursive proof" is. Peter and Richard have interpreted the question as asking whether there is an algorithm to tell, in general, whether any given recursive sets of axioms T and T* have the same theorems. This is one possible interpretation, and Richard has given a very nice proof that the answer is no. But the original question does not seem to be asking about a general solution for all recursive sets of axioms T and T*. Rather, it seems to be asking whether there can be a single pair of recursive sets of axioms T and T* such that T and T* have the same theorems, but this fact is, in some sense, not provable. Let me give an example that might respond to this interpretation. For each natural number n, let P(n) be the statement "n is not the Gödel number of a proof of a contradiction in Zermelo-Fraenkel (ZF) set theory." This is expressible in the language of number theory. Let T be the set of Peano axioms...
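The answer is cut off here, but the follow-up below suggests the standard construction; a hedged reconstruction in symbols (my notation, not stated verbatim above):

```latex
% Reconstruction inferred from the follow-up post: take
\[
  T = \mathrm{PA}, \qquad
  T^{*} = \mathrm{PA} \cup \{\, P(n) : n \in \mathbb{N} \,\}.
\]
% If ZF is consistent, each P(n) is a true statement about a decidable
% property, so PA proves it, and T and T^{*} have the same theorems.
% But proving that fact amounts to proving Con(ZF), which ZF itself
% cannot do (if consistent), by Gödel's second incompleteness theorem.
```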

Yes, I agree, under the alternative interpretation I proposed the word "recursive" in "recursive proof" is doing no work. So my interpretation is not entirely satisfactory either. I don't know which interpretation was meant, but both seem to be interesting questions. As for the generalization to any extension U of PA: Presumably we're talking about recursive extensions of PA. (Well, we do have that occurrence of the word "recursive" floating around that isn't doing any work; perhaps we can use it here.) But there's one other issue. In my argument, I made use of the fact that the consistency of PA is provable in ZF. But the consistency of PA is not provable in PA, so I don't think it's going to be true in general that you can prove in PA that PA and PA(U) have the same theorems iff U is consistent. But I think you can prove it in the theory PA + Con(PA) (that is, the theory PA with an additional axiom asserting that PA is consistent). So as long as U is strong enough to prove...

Is this for philosophers, mathematicians, or logicians? But here goes: Given that the decimal places of pi continue to infinity, does this imply that somewhere in the sequence of digits of pi there must be, for instance, a huge (and possibly infinite) number of the same digit repeated? 77777777777777777777777777..., say? If pi goes on forever, you might think it must be. After all, if you checked pi to the first googol decimal places you obviously wouldn't find an infinite number of anything. Try a googolplex! Still nothing. But we haven't scratched the surface, even though the universe would have fizzled out by now. If pi's decimal places go on forever, there may be not just 77777777777777... or 1515151515151... but all of them, in all combinations, forever. After all, you only have to say "You've only checked a googolplex. There's still an infinite number to check. The universe is long gone, but pi goes on and on." Philosophers, mathematicians, logicians, any ideas? Mark G.

There are two questions here that need to be distinguished: (1) Does the decimal expansion of pi contain a large but finite string of consecutive 7's--say, 1000 consecutive 7's? (2) Does it contain an infinite string of consecutive 7's? The second question is the easier one. The only way that pi could contain a string of infinitely many consecutive 7's is if all the digits are 7's from some point on. And, as William has pointed out, that can't happen because pi is irrational. But the first question is harder. The digits of pi "look random." Imagine a number whose digits are generated by some random process--for example, we might roll a 10-sided die repeatedly to generate the digits. One could compute the probability that a string of 1000 consecutive 7's appears in the first n digits of this number. For n = 1000, this number would be extremely small--you'd have to roll 1000 consecutive 7's on your first 1000 rolls, and that's very unlikely. But as n increases, the probability increases, and...
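The random-digits model lends itself to a short computation; a Python sketch (the function name is mine) that uses dynamic programming to find the probability that n random decimal digits contain a run of k consecutive 7's:

```python
def prob_run_of_sevens(n, k):
    """Probability that n independent uniform decimal digits contain
    a run of k consecutive 7's."""
    probs = [1.0] + [0.0] * (k - 1)  # probs[i] = P(trailing run of 7's has length i)
    done = 0.0                       # P(a run of length k has already appeared)
    for _ in range(n):
        new = [0.0] * k
        new[0] = 0.9 * sum(probs)        # a non-7 digit resets the run
        for i in range(k - 1):
            new[i + 1] = 0.1 * probs[i]  # a 7 extends the run by one
        done += 0.1 * probs[k - 1]       # a 7 completes a run of length k
        probs = new
    return done

print(prob_run_of_sevens(1000, 3))       # about 0.6: three 7's in a row is likely
print(prob_run_of_sevens(1_000_000, 6))  # about 0.6 again, at the million-digit scale
```

For k = 1000 the per-position chance is 10^(-1000), so the probability is astronomically small for any physically checkable n; but, as the answer says, it keeps climbing as n grows without bound.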

In the probability thread, multiple philosophers mention examples of zero-probability events that aren't necessarily "impossible" (like flipping an infinite number of heads in a row). Arriving at a probability of zero in these instances relies on saying that 1/infinity = 0. But this math seems misleading. Don't mathematicians rely on more precise language to avoid this paradoxical result, by saying "the limit of 1/x as x approaches infinity = 0" rather than simply "1/x = 0"? I feel like there must be some way to distinguish between (supposedly) zero-probability events that are actually possible and zero-probability events that are impossible. Thanks!

To answer this question, it may be helpful to say something about the mathematical formalism usually used in probability theory. The first step in applying probability theory to study some random process is to identify the set of all possible outcomes of the process, which is called the sample space. For example, in the case of an infinite sequence of coin flips, the sample space is the set of all infinite sequences of H's and T's (representing heads and tails). Probabilities are assigned to events, which are represented by subsets of the sample space. For example, in the case of an infinite sequence of coin flips, the set of all HT-sequences that start with H represents the event that the first coin flip was a heads, and (assuming the coin is fair) this event would have probability 1/2. The set of sequences that start with HT is a subset of the first one, and it represents the event that the first flip was heads and the second tails; it has probability 1/4. Now, consider some infinite HT...
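In this formalism the probabilities of longer and longer prefixes squeeze any single infinite sequence down to probability zero, with no division by infinity required; a small Python sketch (names mine):

```python
def cylinder_prob(prefix):
    """Probability that an infinite fair-coin sequence starts with the
    given string of 'H' and 'T' characters."""
    return 0.5 ** len(prefix)

print(cylinder_prob("H"))       # 0.5
print(cylinder_prob("HT"))      # 0.25
print(cylinder_prob("H" * 50))  # about 8.9e-16, shrinking with length
# Any single infinite sequence lies inside the event "starts with its own
# first n flips" for every n, so its probability is below 0.5**n for all n,
# hence exactly 0; yet that sequence is still a possible outcome.
```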

A friend posed a problem that according to him reveals an inconsistency in mathematics. There are two envelopes with money in them, and you are given one envelope. One envelope has twice the amount of money as the other, but you don't know which one is which. The question is, if you are trying to maximize your money, after you are given your envelope, should you switch to the other envelope if given the chance? One analysis is: let a denote the smaller amount. Either you have a or 2a in your envelope, and you would switch to 2a or a, respectively, and since these have the same chance of happening before and after, you don't improve and it doesn't matter if you switch. The other analysis is: let x denote the value in your envelope. The other envelope has either twice what is in yours or half of what is in yours. Each of these has probability .5, so .5(2x) + .5(.5x) = 1.25x, which is greater than the x that you started with, so you do improve and should switch. Is there something wrong with...

I'd like to add a little bit to what Thomas has said. Probability problems can be tricky because the answers sometimes depend on small details about exactly what procedure was followed. For example, the problem says that "you are given one envelope." Who gave you the envelope? Did the person who gave you the envelope know which envelope was which? Was he a very stingy person, who might have been more likely to give you the envelope with the smaller amount of money? If so, then the probability that you have the smaller amount might not be 1/2. But that is clearly not the intent of the problem, so let us assume that the person who gave you the envelope flipped a coin to decide which envelope to give you. Then, as Thomas says, the probability is 1/2 that you have the small amount and 1/2 that you have the large amount. Suppose that you open your envelope and find $100 in it. You now know that the other envelope contains either $50 or $200. Do these two outcomes still have probability 1/2 each? ...
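A quick simulation of the coin-flip version of the setup (the function name is mine) confirms the first analysis: keeping and switching have the same long-run average, 1.5 times the smaller amount.

```python
import random

def envelope_game(a=100, trials=100_000):
    """The envelopes hold a and 2a; a fair coin decides which one you get.
    Returns the average payoff of always keeping vs. always switching."""
    keep = switch = 0
    for _ in range(trials):
        mine, other = random.choice([(a, 2 * a), (2 * a, a)])
        keep += mine
        switch += other
    return keep / trials, switch / trials

print(envelope_game())  # both averages come out near 150: switching gains nothing
```

The subtlety the answer goes on to raise, namely what happens after you see $100, depends on the prior distribution of the amounts, which this simulation fixes in advance.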

Consider a first-order axiomatization of ZFC. The quantifiers range over all the sets. However, we can prove (in ZFC) that there is no set which contains all sets. So... how can we make a _model_ for ZFC? The first thing you do when you make a model for a set of axioms is specify a domain, which is a set of things the quantifiers range over... This seems to be exactly what you can't do with ZFC. So what am I missing?

You are right that the first thing you do when you make a model for set theory is to specify a domain. You also have to specify an interpretation for the "is an element of" symbol. For example, you might specify that the domain is to be the set of natural numbers, and the symbol for "is an element of" is to be interpreted to mean "is less than". Of course, under this interpretation most of the axioms of set theory would come out false, so although this is a possible interpretation for the language of set theory, it is not a model of ZFC. But if ZFC is consistent, then it has a model whose domain is the set of natural numbers. In this model, the "is an element of" symbol would be interpreted as some relation on the natural numbers. (The existence of such a model follows from the Löwenheim-Skolem theorem, which says that if ZFC is consistent, then it has a countable model.) Now, you might object that this model is not the intended model, because our intention is that the variables...
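A toy illustration (my own) of why most axioms fail under the "is less than" interpretation: the "elements" of a natural number z are then exactly 0, 1, ..., z-1, so the pairing axiom, which demands some z whose elements are exactly x and y, fails for most pairs.

```python
def pairing_holds(x, y, limit=1000):
    """Interpreting 'a is an element of b' as 'a < b' on the naturals,
    check whether some z < limit has exactly {x, y} as its 'elements'."""
    target = {x, y}
    return any(set(range(z)) == target for z in range(limit))

print(pairing_holds(0, 1))  # True: z = 2 has 'elements' {0, 1}
print(pairing_holds(1, 2))  # False: every nonzero z has 0 as an 'element'
```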

How much math should I know in order to delve really deeply into philosophy of mathematics? Must philosophers of mathematics be mathematicians, as well?

Philosophers of mathematics don't have to be mathematicians, but it would be helpful to know a fair amount of math. Here are some more specific suggestions: 1. You need to study enough math to appreciate the role of proofs in mathematics. Usually students don't see this until they get beyond calculus. Courses on analysis and abstract algebra would show you this side of mathematics. (Abstract algebra would also allow you to see the role of abstraction in math.) 2. There are ideas from logic, such as Gödel's incompleteness theorems, that are important in philosophy of math, so you should study logic. 3. You should know about how some of the fundamental objects studied in mathematics are defined. For example, how are the real numbers defined? Usually mathematicians trace the ideas behind the definitions of these fundamental mathematical objects back to set theory, so it would be good to learn some set theory.

Are there as many true propositions as false ones? More of one than the other?

A few comments on the answers from Professors George and Pogge: 1. If two statements are logically equivalent, do we think of them as expressing the same proposition or two different propositions? If we think of logically equivalent statements as expressing the same proposition, and we use classical logic, in which "it is not the case that it is not the case that P" is logically equivalent to P (and if we ignore, for the moment, Prof. Pogge's worries about meaningless statements), then there's no problem with Prof. George's original pairing. In the example given by Prof. Pogge, "Bush is married" and "it is not the case that it is not the case that Bush is married" express the same proposition, and that proposition is paired with the proposition expressed by "it is not the case that Bush is married". 2. If we do not group together logically equivalent statements as suggested in 1 above, then as Prof. Pogge points out, Prof. George has paired each true statement with a false one, but some false...
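Point 1 can be put in symbols (my gloss, not the professors'): if propositions are classes of logically equivalent statements, negation is a well-defined map that is its own inverse, so it pairs the true propositions with the false ones exactly.

```latex
% N sends the proposition expressed by \varphi to the one expressed by its
% negation; classically, double negation brings us back where we started:
\[
  N([\varphi]) = [\neg\varphi], \qquad
  N(N([\varphi])) = [\neg\neg\varphi] = [\varphi].
\]
% An involution that swaps truth values is a bijection between the true
% propositions and the false ones.
```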

Can sentences refer to themselves? Take the "Liar" paradox: (1) This sentence is false. Does "This" really refer to the sentence I've labeled as (1)? Can sentences predicate properties of themselves in this way?

Your sentence (1) is often given as an example of a paradoxical sentence. The paradox arises from interpreting the phrase "this sentence" as referring to sentence (1) itself. If the sentence is true, then what it says is incorrect, so it is false. But if it is false, then what it says is correct, so it is true. It is tempting to think that the problem is caused by the interpretation of the phrase "this sentence". I suspect that this is the motivation for your question. Can we avoid the problem by saying that the phrase "this sentence" can't be interpreted in this way? The answer is no: the paradox can be created without using this phrase. To see how to do it, first note that it is possible for a sentence to talk about a linguistic expression. The usual way to accomplish this is to use quotation marks. For example, here is a sentence that talks about sentence (1): (2) "This sentence is false" is a paradoxical sentence. Of course, there's nothing paradoxical about sentence (2). ...
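The answer is cut off before completing the construction, but the quotation device it introduces has a well-known computational analogue: a program that refers to itself by manipulating a quoted copy of its own text, with no "this program" phrase anywhere. A minimal Python sketch:

```python
# A quine: self-reference via quotation alone.  The string s is quoted
# data; substituting s into itself (%r re-quotes it) reproduces the
# entire program as output.
s = 's = %r\nprint(s %% s)'
print(s % s)
```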
