Is it possible for a mathematical equation to both be fundamentally unsolvable and also have a correct answer?

I hope philosophers of math on the Panel will respond with more authority than I have. My understanding is that Gödel showed that arithmetic contains pairs of mutually contradictory statements neither one of which is provable within arithmetic. Assuming the standard logical law that exactly one of every pair of mutually contradictory statements is true, we get the result that some arithmetical truths are unprovable within arithmetic. I can't say whether those truths include statements to the effect that such-and-such is the solution to an equation, but if they do, and if their being unprovable within arithmetic makes the associated equations "fundamentally unsolvable," then the answer to your question is yes. Someone might reply that an unprovable arithmetical statement can't be true, but I think that would be to mistake truth for provability.

To answer this question properly, we would need to make some of the terms used in the question more precise. Math only works with precise definitions. But there is a natural way to do this, and it does bring us close to Gödel's work.

A Diophantine equation is any equation of the form:

f(x, y, z, ..., w) = 0

where f is a polynomial (i.e., something like x³ + 3x²y² + 4xy³) and the question is: Is there an integral solution to the equation? I.e., a way of assigning integers (positive or negative whole numbers, or zero) to x, y, z, ..., and w so that the equation comes out true? One very famous such equation is:

x⁷ + y⁷ = z⁷

This is what people call "Fermat's Theorem for 7". We now know that it has no nontrivial integral solutions (i.e., none in which x, y, and z are all nonzero), and the same goes for any other prime exponent except 2.
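The point made later in this response, that it is always easy to check whether a proposed solution really is one, can be illustrated with a short sketch in Python (the function name is my own, not from the original discussion):

```python
def is_solution(x, y, z):
    """Check whether (x, y, z) satisfies x^7 + y^7 = z^7.

    Verifying a candidate solution needs only basic arithmetic,
    which is exactly why exhibiting a solution always suffices
    to show that one exists.
    """
    return x**7 + y**7 == z**7

# Trivial solutions exist, e.g. y = 0 and z = x:
print(is_solution(3, 0, 3))   # True
# But there are no solutions with x, y, z all nonzero:
print(is_solution(2, 3, 4))   # False
```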

Diophantine equations crop up all over mathematics. So, in a famous lecture in 1900, the great mathematician David Hilbert posed the question: Is there some general way of telling, given an arbitrary Diophantine equation, whether it is solvable? This became known as Hilbert's Tenth Problem, because it was tenth on a list of twenty-three such problems that Hilbert mentioned in his lecture. Well, the answer turns out to be no. This was proven in 1970 by the Russian mathematician Yuri Matiyasevich, building on work by Julia Robinson, Hilary Putnam, Martin Davis, and others. For reasons I won't be able to explain here, this implies (here's the connection with Gödel) that no intelligible (i.e., "formal") system of mathematics, no matter how many axioms and special rules it might have, can, for every Diophantine equation, prove either (a) that it has a solution or (b) that it does not have a solution. I.e., there will always be a Diophantine equation such that the system in question cannot tell us whether it has a solution or not.

Note, however, that, if there is a solution, then there is always going to be a way of showing that there is one: One just has to exhibit the solution, and of course one can always check whether a proposed solution really is a solution, using techniques we all learned in basic algebra. For example, it's easy to tell whether x⁵ + y⁵ = z⁵, for some specific x, y, and z. And, indeed, if there is a solution, one can always, in principle, find it: You just have to work through all the possible solutions. There are infinitely many of these, but you can do an orderly search through them, checking each one in turn. So, if there is a solution, you will eventually find it. Of course, we might die before we find it, or the universe might experience heat death, or who knows what, but such practical matters are irrelevant here. So, better: If there is a solution, it can, in principle, be found in a finite amount of time. What Matiyasevich showed, by contrast, is that, if there isn't a solution, you can't always tell that. (You can't, of course, work through all the possibilities to see that none of them work.)

This point extends to more complicated sorts of equations: If we're talking about equations involving integers (or even rationals), and if it's possible to check whether a proposed solution is a solution, then (a) if there is a solution, it will always be possible to exhibit it and (b) it will, in principle, be possible to find it.

So, as long as we're talking about these sorts of equations, then, the answer to the question asked is: No, there can't be an equation that has a correct answer but that we can't show to have a correct answer. But it is possible for there to be an equation that doesn't have a correct answer and yet for us to be unable to prove that it has no correct answer, and even for it to be impossible for us to prove it has no correct answer, using anything like currently accepted mathematical techniques. Moreover, given any circumscribed set of mathematical methods (e.g., those formalizable in so-called Zermelo-Fraenkel set theory, or any of the stronger systems people have studied), there will always be such equations: ones that are not solvable, but that cannot be shown to be unsolvable using those methods.

If we worry instead about equations involving real numbers, then everything becomes more complicated, and I suspect the answer changes. But this is just because real numbers (like pi, say) are infinite objects, and even such simple operations as squaring are, in general, impossible actually to carry out exactly.

Good question!

Read another response by Stephen Maitzen, Richard Heck