Could math have possibly developed without the Cartesian coordinate system? Or is this a necessary and therefore inevitable construct that must be discovered "sooner or later"? - andy c. nguyen

Well, the first thing to say is that mathematics did develop for some time without the Cartesian co-ordinate system. And there are plenty of branches of mathematics where it isn't terribly important, for example, abstract algebra. It's also worth saying that there are lots of other co-ordinate systems, for example, polar co-ordinates. What is true is that Cartesian co-ordinates brought two powerful branches of mathematics, geometry and analysis, into a close relationship they had not previously enjoyed. I don't see any reason such a relationship would have had to be discovered at some point, but it is an extremely natural one.

Are there any contradictions of the Axiom of Choice (AOC) that are consistent with basic mathematical logic? Has anyone tried to develop a non-AOC theory?

The Axiom of Choice (usually denoted "AC") is a statement of set theory rather than of basic mathematical logic, so the theories of interest are versions of set theory that reject AC. As Dan said, any theory containing the Axiom of Determinacy will imply not-AC, but one can also simply look at what is possible without AC and, similarly, what cannot be proven without AC. There is a nice guide to such results: Thomas Jech's The Axiom of Choice . There are also weaker forms of AC, such as the Axiom of Countable Choice (every countable set of non-empty sets has a choice function) and the Axiom of Dependent Choice (more complicated). There are also forms that are stronger than what is usually assumed in set theory, in particular, what is sometimes called the Axiom of Global Choice.
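The idea of a choice function can be made concrete in the finite case. Here is a minimal Python sketch (the function name and example family are my own illustration); the point of AC, of course, is that for certain infinite families no uniform rule for making the choices can be specified, so nothing like this program exists.

```python
# A choice function picks one element from each non-empty set in a family.
# For a finite, explicitly given family this is trivially computable; the
# Axiom of Choice matters when the family is infinite and no rule is available.

def choice_function(family):
    """Return a list pairing each set in the family with one chosen element."""
    return [next(iter(s)) for s in family]

family = [{1, 2, 3}, {"a", "b"}, {4.5}]
chosen = choice_function(family)
assert all(c in s for c, s in zip(chosen, family))
```

The Axiom of Countable Choice mentioned above asserts exactly that such a function exists for any countable family of non-empty sets, even when no explicit rule can be written down.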

Can we have a POSITIVE understanding of such concepts as infinity? What I mean is that, whilst I am sure that we can well grasp the concept of finiteness, can we do more than negate it (which would yield not-finiteness), can we understand infinity from the inside instead of by negating everything that lies outside of it? Thanks, Andrea Jasson

There are two kinds of definitions of infinity, one which is "negative" in your sense and one which is "positive". The former is, historically, the older. There are many equivalent definitions of finitude. My own favorite is due to Gottlob Frege: A set is finite if its members can be ordered as what he called a "simple" series that has an end, but it would take some time to explain what Frege meant by a "simple" series. What's nice about Frege's definition, though, is that it amounts to a formal elaboration of the idea that a set is finite if it can be counted. A definition close in spirit to Frege's is due to Ernst Zermelo: A set S is finite if it can be "doubly well-ordered", that is, if there is a relation < on S with the following properties: (1) < is a linear order: for all x and y in S, either x<y or x=y or y<x; (2) every non-empty subset of S has a <-minimal member; that is, if T ⊆ S and T is non-empty, then for some x∈T, for every other y∈T, x<y; (3) every non-empty subset of S has a <-maximal...
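Zermelo's conditions can be checked mechanically on a finite set. The sketch below (names and setup are my own illustration) tests linearity, transitivity, and the minimal/maximal conditions by brute force. Note the limitation: on a finite set any linear order passes, and the interesting failure, the natural numbers under <, where infinite subsets lack maxima, cannot be enumerated by a program.

```python
from itertools import chain, combinations

def is_double_well_order(S, less):
    """Check Zermelo-style conditions on a finite set S with relation `less`.

    Illustrative only: every linear order on a finite set is doubly
    well-ordered; the definition bites on infinite sets.
    """
    S = list(S)
    # Linearity: any two distinct elements are comparable.
    for x in S:
        for y in S:
            if x != y and not (less(x, y) or less(y, x)):
                return False
    # Transitivity.
    for x in S:
        for y in S:
            for z in S:
                if less(x, y) and less(y, z) and not less(x, z):
                    return False
    # Every non-empty subset has a <-minimal and a <-maximal member.
    subsets = chain.from_iterable(combinations(S, r) for r in range(1, len(S) + 1))
    for T in subsets:
        has_min = any(all(less(x, y) for y in T if y != x) for x in T)
        has_max = any(all(less(y, x) for y in T if y != x) for x in T)
        if not (has_min and has_max):
            return False
    return True

assert is_double_well_order({0, 1, 2, 3}, lambda a, b: a < b)
assert not is_double_well_order({0, 1}, lambda a, b: False)
```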

It was said [in a Google groups post ] that "If a [mathematical] proof requires the checking of a very large but finite number of cases, far too many for a human to check, and we use a computer to perform that check, should we count the proposition as proved?" is an open question of mathematical philosophy. Why would anyone think the answer is anything but "yes"? The proof may not have desired aesthetic qualities, but no mathematician would deny its validity even though she may try to create a more pleasing proof.

I don't have anything to add to Dan's comments, which pretty much cover the bases. But I will add another reference: Tyler Burge, "Computer Proof, A priori Knowledge, and Other Minds", Philosophical Perspectives, vol. 12, pp. 1-37, 1998. Burge's discussion ties the question asked here to very general questions about the nature of a priori knowledge.

The numbers e, i and pi are related. Is this natural or a consequence of the way we do our mathematics? Iain Nicholson

I'm not sure what's meant by "natural" here. But the numbers e, i, and π are the numbers they are, and are related as they are, quite independently of how we choose to do mathematics, just as the stars are hot balls of fiery gases whether or not anyone regards them as such. Whether Euler's Equation (see question 393) plays any important role in our mathematical theories is, on the other hand, a consequence of how we choose to formulate them, and so one might ask why we should choose to formulate mathematics as we do instead of some other way. But again, this question is no different, in principle, from the question why we should accept the astronomical theories we do. Mathematicians have, and can give, good reasons for formulating analysis in the way they do, and there are sometimes disagreements about how best to proceed. These disagreements get resolved (or not) on broadly mathematical grounds. One or another formulation leads to a fruitful way of conceiving the problem space; others do not...

Dan says that he would not say that whether e^(iπ) = -1 is "independent of how we choose to do mathematics", "because it does depend on how we choose to define exponentiation for complex numbers". But to what does the emphasized "it" refer? Presumably, to the claim that e^(iπ) = -1. But we really should not say that whether e^(iπ) = -1 depends upon how we have chosen to define exponentiation. Whether "e^(iπ) = -1" expresses a truth depends upon how we define exponentiation, but whether e^(iπ) = -1 does not, just as whether "3+4=7" expresses a truth depends upon what "3", "4", "7", "+", and "=" mean, but whether 3+4=7 does not.

Does the equation "e to the power i x pi = -1" have any physical meaning? Is there a meaning waiting to be discovered?

For those who do not know (I had to look it up; it's been a while!), this equation follows from a more general equation, known as Euler's Equation: e^(ix) = cos(x) + i sin(x). Complex analysis is applied in many parts of physical science, and it would be surprising if such a fundamental relation did not have some physical interpretation in, say, fluid mechanics. But I don't know to what extent complex exponents are in play there. There is another question that arises simply within mathematics and that might have an interesting answer, namely: Is there some natural geometrical interpretation of Euler's equation? I don't know the answer to that question, either.
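Euler's equation is easy to check numerically; here is a short sketch using Python's standard cmath module (the sample values of x are my own choice):

```python
import cmath
import math

# Numerical check of Euler's equation e^(ix) = cos(x) + i*sin(x)
# for a few sample values of x.
for x in (0.0, 1.0, math.pi / 2, math.pi):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(lhs - rhs) < 1e-12

# The special case x = pi gives e^(i*pi) = -1, up to floating-point rounding.
assert abs(cmath.exp(1j * math.pi) - (-1)) < 1e-12
```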

What are the major open questions of mathematical philosophy? Of these, which are mathematically significant, if any? By "mathematically significant," I mean "would affect the way mathematicians work." For example, the question of whether mathematics is created or discovered has no impact on working mathematicians. On the other hand, studies into the foundations of Math were certainly mathematically significant, and although one could argue that that was more Math than Phil, we can give Phil some credit. But that question is now closed, as far as mathematicians are concerned.

First, I'm going to bristle. Logicians are mathematicians, even if most mathematics departments nowadays don't seem to want them. Work in logic is often driven by profound philosophical concerns, and in the best such work—Solomon Feferman's would be an example—mathematics and philosophy are so intertwined that it would be pointless to try to disentangle them. But I'll stop bristling now and assume that the question concerned how foundational work might affect non-foundational work. One major question is to what extent incompleteness, of the sort that interested Gödel, is of serious (non-foundational) mathematical concern. It's been known for some time now that most of the central results of non-foundational mathematics can be proven in tiny fragments of Zermelo set theory, let alone Zermelo-Fraenkel set theory. Much work in set theory, on the other hand, concerns extensions of ZF that make the first inaccessible cardinal—that being the smallest cardinal whose existence cannot be proved in ZFC (if it is...

Does infinity exist in reality?

My thesis supervisor, George Boolos, wrote his own dissertation on the analytic hierarchy, whose description I shall omit. Suffice it to say, for present purposes, that the sets it concerns are infinite, and there are infinitely many of them. At the end of his oral defense, one of his examiners said, "So tell me, Mr Boolos. What does the analytic hierarchy have to do with the real world?" George's response was: It's part of it. Perhaps you had in mind physical reality. Then the question is a scientific one. Space is not, according to current physical theory, infinite in extent. But physics, in its contemporary form, typically describes both space and time as continuous (though, as I understand it, there are or at least have been proposals to quantize space). If that is correct, then space is, in one sense, infinite, in so far as there are infinitely many points even in bounded regions of space.

Suppose we decide to let 'Steve' name the successor of the largest number anyone has ever thought about before next Tuesday. Can I now think about Steve? For example can I think (or even know) that Steve is greater than 2? If not, why not? If so, wouldn't that mean that some numbers are greater than themselves?

This question poses a version of Richard's paradox. (That's French: RiSHARD.) It's clear that not every number can be named using an expression of English that contains fewer than twenty-five syllables. There are only finitely many such expressions, after all. So there are some numbers that are not namable using fewer than twenty-five syllables, and it therefore follows from the least number principle (which is equivalent, under weak assumptions, to mathematical induction) that there is a least number that is not namable using fewer than twenty-five syllables. But now consider the phrase "the least number not namable using fewer than twenty-five syllables". It has twenty-four syllables, and so it would seem that the least number not namable using fewer than twenty-five syllables can be named using only twenty-four. Contradiction. What Richard's paradox shows is that the notion of namability or definability needs to be treated with great care. A great deal of interesting mathematics was done in...
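The finiteness step in the argument is just a pigeonhole count, which can be made vivid with some toy arithmetic. The sketch below assumes, purely for illustration, a hypothetical fixed vocabulary of 50,000 words, each with at least one syllable, so a name of fewer than twenty-five syllables uses at most twenty-four words:

```python
# Toy illustration of the finiteness step in Richard's paradox: with a
# finite vocabulary (50,000 words here is a hypothetical figure) and at
# most 24 words per name, there are only finitely many candidate names.

VOCAB_SIZE = 50_000   # hypothetical size of the available lexicon
MAX_WORDS = 24        # fewer than 25 syllables => at most 24 one-syllable words

total_names = sum(VOCAB_SIZE ** k for k in range(1, MAX_WORDS + 1))

# total_names is enormous but finite; since there are infinitely many
# natural numbers, some number has no such name, and by the least number
# principle there is a least one.
assert total_names > 0
```

However large the count, it is finite, which is all the argument needs before the paradoxical phrase enters the picture.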

As a teacher of high school mathematics and a former student of philosophy, I try to merge the two to engage my students in meaningful conversations about the significance of some mathematical properties. Recently, however, I could not adequately defend the statement "a=a" as being necessary for our study of geometry when one student challenged "When is a never NOT equal to a?" What would you tell them? (One student did offer the defense that "Well, if we said a=2 and a=5 then a=a would be false, causing problems.")

Identity is an important notion in mathematics. There certainly are examples of geometrical theorems that demonstrate identities, some of them very important. Consider, for example, the (Euclidean) theorem that the three lines from the vertices of a triangle bisecting the opposite sides meet in a single point. Any time one uses a word like "single", identity is in play. Frege gives a similar example in section 8 of Begriffsschrift to illustrate the same point: Mathematical identities can have substantial content. Elementary arithmetic may be a better illustration of how important identities are, though, since basic arithmetical facts are all equalities, that is, identities. That said, the question arises how identity is to be characterized. There are various ways to proceed. But any characterization is going to have to deliver four basic properties of identity: reflexivity (that is, a=a), symmetry (that is, a=b → b=a), transitivity (a=b & b=c → a=c), and substitutivity, which says (roughly) that if you have …a...