Is Russell's Paradox a problem for our confidence that 2+2=4 is true? I've never understood how big a problem it represents in math. Does it throw everything into doubt, or just some things? The Stanford Encyclopedia entry is a bit technical.

Russell's Paradox is a problem for set theory--or at least it was when Russell discovered it. The most popular modern approach to set theory is based on the axioms developed by Zermelo and Fraenkel, and the Zermelo-Fraenkel (ZF) axioms are formulated to avoid the paradox. So Russell's Paradox is not a problem for modern set theory. The reason paradoxes in set theory are considered to be such a serious matter is that most mathematicians regard set theory as the foundation of all of mathematics. Virtually all mathematical statements can be formulated in the language of set theory, and all mathematical theorems--including your example 2+2=4--can be proven from the ZF axioms. But you ask about our "confidence" that 2+2=4 is true. I don't think anyone's confidence in 2+2=4 is based on the fact that it is provable in ZF set theory, even though ZF is regarded as the foundation of mathematics. It's hard to imagine anyone having serious doubts about whether or not 2+2=4, and having those doubts relieved...
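For readers who want to see the paradox itself rather than just its repair, here is a minimal sketch of the usual derivation, in standard set-builder notation. Consider

    R = \{\, x \mid x \notin x \,\},

the collection of all sets that are not members of themselves. By the defining condition, for every set y we have y \in R \iff y \notin y; taking y = R gives

    R \in R \iff R \notin R,

which is a contradiction. The ZF axioms block this by allowing set-builder notation only in the restricted form \{\, x \in A \mid \varphi(x) \,\}, where A is a set you already have, so there is no universal set from which R could be carved out.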

This is more of a comment on the question in Mathematics that starts with: "If you have a line, and it goes on forever, and you choose a random point on that line, is that point the center of that line? And if you ..." The answer provided by the panelist, as well as the initial question, assumes that one can distinguish between points at infinity. As far as math goes, however, one cannot do that, and this is the reason the limit of cos(phi), as phi goes to infinity, does not exist. Revisiting the argument provided by the panelist, the error starts with the 'definition' of the distance between a fixed point and infinity: this distance cannot be defined, and therefore it cannot be compared (at least, as far as math goes). A somewhat similar problem can be stated, without the pitfalls of the infinity concept, for a point on a circle, or any other closed curve.
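To spell out the claim about the cosine limit, in standard notation:

    \cos(2\pi n) = 1 \quad\text{and}\quad \cos\bigl((2n+1)\pi\bigr) = -1 \quad \text{for every positive integer } n,

so cos(phi) keeps returning to both 1 and -1 and approaches no single value as phi goes to infinity.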

It seems to me that you are reading things into the original question, and my answer to it, that were not there. I do not see, either in the original question or in my answer, any reference to "points at infinity". The original question talks about a line going on forever, and my answer talks about the line extending infinitely far in either direction from some point P on the line. But this just means that for every number x, there are points on the line more than x units away from P in either direction, not that there are points that are infinitely far away from P. I claimed that the parts of the line on either side of P are congruent, and you can see this by observing that if you rotate the line 180 degrees around P, each side gets moved so that it coincides with the other side. My previous answer was based on a particular definition of "center". There is another, slightly different definition of "center" that could lead to the sorts of worries that you raise. Suppose we define the center point...
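To make the congruence claim completely explicit, here is a sketch in coordinates; the letter p for the coordinate of P is introduced only for this illustration. Put a coordinate system on the line so that P sits at coordinate p. The 180-degree rotation about P acts on the line as the map

    f(x) = 2p - x.

This map preserves distances (|f(x) - f(y)| = |x - y|), fixes P, and sends the set of points with x > p onto the set of points with x < p and vice versa. So the two sides of P are congruent, and the argument never needs a point infinitely far from P, only points arbitrarily far from P.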

How do you tell the difference between a reductio and a surprising conclusion?

There's an interesting example from mathematics that might be relevant here. The Axiom of Choice (AC) is an axiom of mathematics that was quite controversial when Zermelo introduced it in 1904. It is less controversial today, perhaps because Gödel showed in 1940 that if the other axioms of mathematics are consistent, then adding AC cannot introduce a contradiction into mathematics. But AC does lead to some very surprising conclusions. One of the most famous is the Banach-Tarski Theorem, sometimes also called the Banach-Tarski Paradox because it is so surprising. The Banach-Tarski Theorem says that it is possible to decompose a ball of radius 1 into a finite number of pieces and rearrange those pieces to make two balls of radius 1. (The "pieces" are actually more like clouds of scattered points that, together, fill up the entire ball.) Should the Banach-Tarski Theorem be considered a reductio ad absurdum proof that AC is false? It certainly doesn't count as a mathematical proof that AC is...

Is self-contradiction still the prima facie sign of a faulty argument? How do we tell an apparent contradiction from a real contradiction if the argument is in words? (Most of us don't know how to translate arguments in words into symbolic logic.)

It is perhaps worth adding that self-contradiction is not the only sign of a faulty argument. An argument can be faulty but not lead to a contradiction. For example, suppose that you know that some number x has the property that x^2 = 4. If you claim that x must be 2, you have engaged in faulty reasoning. The conclusion x = 2 does not contradict the hypothesis that x^2 = 4; the two statements are perfectly consistent. But your reasoning is faulty because you haven't taken into account the possibility that x might be -2.
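For comparison, the careful version of the reasoning goes like this: from x^2 = 4 we get

    x^2 - 4 = (x - 2)(x + 2) = 0,

so x = 2 or x = -2. Both values are consistent with the hypothesis, and nothing in the hypothesis rules either one out, which is exactly why the jump to "x must be 2" is unjustified.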

If you have a line, and it goes on forever, and you choose a random point on that line, is that point the center of that line? And if you picked a new point, would that become the center of the line (since to either side of the point is infinity, and infinity is congruent to infinity)? Also if the universe has no middle and no end, am I, and everyone, at the center of the universe? (Of course the middle of the universe thing only works if you believe the universe has no middle and no end.)

As with so many questions in mathematics, the answer will depend on exactly how you define your terms. In this case, we will have to decide how to define the word "center". Now, you hint at a possible definition in your question, when you speak of the parts of the line on either side of a point as being congruent. Let's make this definition explicit. Suppose we define a center point of a line or a line segment to be a point with the property that the parts of the line or line segment on either side of that point are congruent. Then, for example, in a line segment of length 1 inch, the point that is 1/2 inch from each end will be the unique center point of the segment; the parts of the segment on either side of that point both have length 1/2, and are therefore congruent. But if we apply this definition to a line that extends infinitely far in both directions, then we find that every point is a center point, because, as you observe, the parts of the line on either side of any point extend...
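To make the contrast concrete, here is the segment case worked out in coordinates; the letters a, b, and c are introduced only for this illustration. For a segment with endpoints at coordinates a and b (with a < b), a point at coordinate c splits it into pieces of lengths c - a and b - c. Those pieces are congruent exactly when

    c - a = b - c, \quad\text{that is,}\quad c = \frac{a + b}{2},

so the midpoint is the unique center point in this sense. On a line that extends infinitely far in both directions, the two pieces on either side of any point are rays, and any two rays are congruent, so the condition picks out every point rather than a single one.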

I recently heard about mathematical paradoxes and I have a perhaps strange question: It seems to me that the goal is to figure out what the fundamental problem is, i.e. what gives rise to the paradox, so we can perhaps rewrite the axioms so that the problems disappear. But why not just say: "Well, paradoxes arise when you talk about sets that contain every set, so let's avoid talk about sets that contain every set." (Kind of like saying that bad things happen when you divide something by zero, so don't do it!)

It's not so easy to know if you've avoided talk that could lead to a contradiction. Is it OK to talk about a set that contains all sets but one? All sets but two? No--it turns out those sets lead to contradictions too. What if you don't explicitly refer to a set that contains all sets, but such a set is used implicitly in some piece of reasoning? Where do you draw the line? How do you know if you've crossed the line? Rewriting the axioms is a way of drawing the line. One way to see why avoiding contradictions is so important is to think about proof by contradiction. To prove a statement P, mathematicians sometimes assume that P is false and then try to deduce a contradiction. This method of proof is based on the idea that if you can deduce a contradiction, then the assumption that P was false must be incorrect, so P must be true. But if contradictions can arise even if you haven't made a false assumption, then you'll be able to use proof by contradiction to prove false statements. (In fact,...
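A schematic way to see the danger: to prove a statement P by contradiction, you

    \text{assume } \neg P,\ \text{derive some } Q \text{ and } \neg Q,\ \text{conclude } P.

If the axioms themselves already yield both Q and \neg Q for some Q, then this recipe "proves" every statement whatsoever: assume \neg P, never use that assumption, derive Q and \neg Q from the axioms alone, and conclude P. That is why a single contradiction in the axioms would not be a local nuisance; it would make every statement, true or false, provable.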
