The Limits of Logic: A Casual Introduction
I don't want you to have a naive trust in human reason. Let us take a journey into modern mathematics to see that even the most rigorous accomplishments of reason may not quite be "the Truth."
I wrote a previous post about the philosophical necessity of doubt. As Kant's critiques show, even pure reason has its limits.
Now, gun to head, the average person would say that mathematical facts are among the most trustworthy. 2 + 2 = 4 is about as sure as it gets. Even Kant agrees that such statements are synthetic a priori truths as they relate to human experience.
But there’s that little caveat at the end: “as relates to human experience.” It’s true for us, it’s true for phenomena, but we ought not say it is true period, or true for noumena, or anything like that.
This is a post for general audiences. I don’t mean to scare you away with math. I mean to expose you to certain mathematical ideas—not in order to explore their technical consequences, but to illustrate a certain viewpoint of the philosophy of mathematics.
The first barrier to saying that there is a capital-T mathematical Truth is that there is not one mathematical truth but many.
One of the most famous narratives in mathematical history is that of Euclid’s Fifth Postulate. Euclid, the Ancient Greek geometer, wrote the Elements to set forth the principles of geometry. He assumes various axioms and postulates in order to deduce necessary theorems and corollaries. But for millennia after his work, many mathematicians believed that his fifth postulate was unnecessary and could be proven from the others. It seemed so simple: if two lines are crossed by a third, and the interior angles on one side of that crossing line sum to less than two right angles, then the original two lines must intersect on that side. An equivalent statement in the context of Euclid’s other axioms is better known due to Playfair: given a line and a point not on the line, there exists exactly one line passing through the point parallel to the given line.
Simple enough. But for thousands of years people tried to prove this fact and failed. They failed for the simple reason that it’s not true.
Well, it’s not True, anyway.
The statement is true for any lines you might draw on a flat sheet of paper, what we call the Euclidean plane. What mathematicians didn’t fully grasp for centuries was that the Euclidean plane is not the only space that satisfies Euclid’s other axioms of geometry. We say that there are multiple models of Euclidean geometry minus the fifth postulate.
Indeed, plane geometry is the simplest, but there are non-Euclidean geometries which satisfy Euclid’s other rules while violating the fifth postulate. In hyperbolic geometry the other axioms hold, yet a point off a line admits many parallels through it. In spherical geometry, which bends a few of the other rules as well, the notions of line and angle differ somewhat from what you may be used to: a line is a great circle, a circle drawn on the sphere’s surface whose center coincides with the center of the sphere itself; no two lines are parallel, and triangles have interior angle sums greater than 180 degrees.
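The claim about spherical triangles is easy to check numerically. Here is a minimal sketch in Python (the function names are my own, not from any library), which computes the interior angles of a spherical triangle from its vertices given as unit vectors. For the triangle formed by the north pole and two equatorial points a quarter-turn apart, every angle is a right angle, so the sum is 270 degrees rather than 180.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def tangent(frm, to):
    # Direction of the great-circle arc from `frm` toward `to`,
    # projected into the tangent plane of the sphere at `frm`.
    d = sum(a * b for a, b in zip(frm, to))
    return normalize(tuple(t - d * f for t, f in zip(to, frm)))

def angle_sum(a, b, c):
    # Sum of the interior angles (in degrees) of the spherical
    # triangle with vertices a, b, c, each a unit vector.
    total = 0.0
    for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
        u, v = tangent(p, q), tangent(p, r)
        dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(u, v))))
        total += math.degrees(math.acos(dot))
    return total

# North pole plus two equatorial points 90 degrees apart: the
# triangle covers one octant of the sphere, and each angle is right.
print(angle_sum((0, 0, 1), (1, 0, 0), (0, 1, 0)))  # → 270.0
```

A nearly flat (very small) triangle, by contrast, has an angle sum only barely above 180 degrees, which is why the curvature went unnoticed on the scale of sheets of paper.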
In general, then, you cannot say that Euclid’s Fifth Postulate is true or false. Sometimes it’s one, sometimes the other, depending on the space you’re working in. Naturally, this is true of all sorts of mathematical postulates: the group axioms are satisfied by, well, groups, but not by other algebraic structures, and so on.
There is a natural counterargument to this:
Perhaps Euclid’s Fifth Postulate itself is not capital-T True, since there are models of geometry where it does not hold. We shouldn’t say that any given axioms are True. But we can still say that implications of the schema Axioms ⇒ Theorems hold regardless of whether we’re in a space where the axioms themselves do. These implications are the mathematical Truths.
This view is certainly an improvement over the naive view that certain axioms are themselves true. We’ll see how well it holds up in the light of other considerations later.
The second reason to lose faith in mathematical Truth is that math lacks confidence in itself.
Non-students of math have probably never heard of Gödel and I’m really not sure whether that’s a shame or whether it’s for the best. More than anyone else he proved the limits of what math can do.
To set the stage: in 1900 the esteemed mathematician Hilbert set out a list of twenty-three problems in the field that he would like to see solved. The second of these was to prove that basic arithmetic is consistent—meaning it has no contradictions. By the principle of explosion, even one contradiction in arithmetic would mean that saying 2 + 2 = 5 is just as correct as saying 2 + 2 = 4; this would simply be too horrible to be true. Hilbert already assumed that arithmetic is consistent; he was just asking someone to come up with a clever way to prove it.
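The principle of explosion can even be checked mechanically. Here is a one-line sketch in Lean 4 (P and Q are arbitrary placeholder propositions): from any proposition together with its negation, any other proposition whatsoever follows.

```lean
-- Ex falso quodlibet: a contradiction proves anything.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```

This is why a single contradiction is fatal: once both a statement and its negation are provable, every statement becomes provable.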
What could go wrong?
Enter Gödel a few decades later with his famous incompleteness theorems. Working within the system of basic arithmetic, he uses a clever trick: encoding formulas of arithmetic (like a + b = b + a) as numbers (sort of like how your computer encodes videos or pictures as binary numbers). Now he can use the laws of arithmetic to handle mathematical formulas too, and arithmetically examine questions about formulas (like whether a + b = b + a ⇒ a + b + c = c + b + a). His next clever trick is to construct a particular self-referential formula, one which roughly represents the statement “this formula cannot be proven from the arithmetic axioms.” Well, if you could prove this statement, the statement would be false, and you would have proven something false; this would mean arithmetic is inconsistent. And if you can’t prove it, that means there are facts about arithmetic that arithmetic can neither prove nor disprove; it is incomplete.
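The encoding trick can be made concrete. The sketch below, in Python, is illustrative rather than Gödel's actual numbering (the tiny symbol table and helper functions are invented for this example): each symbol of a formula becomes a small number, and the whole formula is packed into a single integer as a product of prime powers, which can be unambiguously unpacked again by factoring.

```python
# Toy Gödel numbering: one prime per symbol position, with the
# exponent identifying which symbol sits there.

SYMBOLS = "ab+="  # a tiny alphabet, just for this sketch

def prime_gen():
    # Infinite generator of primes by trial division.
    found = []
    candidate = 2
    while True:
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

def encode(formula):
    n = 1
    for p, ch in zip(prime_gen(), formula):
        n *= p ** (SYMBOLS.index(ch) + 1)
    return n

def decode(n):
    chars = []
    for p in prime_gen():
        if n == 1:
            break
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        chars.append(SYMBOLS[exponent - 1])
    return "".join(chars)

print(encode("a+b"))              # 2^1 * 3^3 * 5^2 = 1350
print(decode(encode("a+b=b+a")))  # a+b=b+a
```

Once formulas are just numbers, statements *about* formulas (such as "this formula is provable") become statements about numbers, which arithmetic can talk about—this is the door through which Gödel's self-referential sentence enters.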
This is Gödel’s first incompleteness theorem: any formal theory whose axioms can be effectively listed and which is at least as powerful as basic arithmetic is either inconsistent or incomplete.
This hits like a punch to the gut. Either you’re doing math in an inconsistent theory, where thanks to the principle of explosion every statement is provable and proof is totally worthless—or you’re in an incomplete theory, and there are true statements that you can never, ever prove, even in principle.
It gets a little bit worse, too. You might hope that even if some truths of arithmetic are forever inaccessible, we could still use arithmetic’s own tools to prove arithmetic consistent. We would have a theory missing some facts but provably reliable.
Gödel’s second incompleteness theorem kills this notion: it states that any consistent theory at least as powerful as arithmetic can never prove its own consistency. The first theorem said there are some things you can never prove, while the second says that the one thing you’d really like to prove is one of those that you never can.
For these reasons I said at the start of this section that math lacks self-confidence—within any given theory, the great fact of the theory’s consistency, the fact that it even makes sense to work in that theory, cannot be proven. There’s uncertainty and doubt.
I skipped over a significant fact in the previous discussion, which you may have picked up on from my insistence on repeating that Gödelian limitations exist only within any given theory.
You can, in fact, prove basic arithmetic (which here we’ll call Peano arithmetic) to be consistent. The incompleteness theorems merely tell us that Peano arithmetic is incomplete and unable to prove its own consistency. But there’s no reason that we can’t prove that the Peano axioms are consistent within another, stronger theory, and in fact this can be done and has been done. ZFC, the axioms of set theory that define the mathematical universe most mathematicians live in most of the time, is sufficient to do this.
But this doesn’t resolve the problem. ZFC cannot prove its own consistency. You can add another axiom to ZFC saying that “ZFC’s other axioms are consistent” (a statement we’ll call Con(ZFC)). Then ZFC+Con(ZFC) proves ZFC’s consistency but not its own; you could add another axiom Con(ZFC+Con(ZFC)) if you’d like and continue in this way ad infinitum, creating an endless hierarchy of stronger and stronger theories, but to what end?1 You’re not gaining any ultimate certainty about the consistency of your uppermost theory.
Unfortunately, it’s turtles all the way down.
A third reason for skepticism is that math has to outsource its own epistemology.
Tarski’s undefinability theorem is another scary result, a cousin of Gödel’s theorems. It says that, working in a given theory, you cannot build a formula within that theory that delineates all true statements from false ones. As before, in order to have a set of all true statements you have to jump up from the theory itself to the metatheory in which it resides. When you interpret a theory’s statements, then, you’re not operating within the theory—you’re operating within human “common sense.”
From the second incompleteness theorem we learned that any given mathematical theory can’t prove itself; now we learn that it can’t even interpret itself, in a sense. This is a Kierkegaard blog and we’re existentialists here, so the philosophical analogy is obvious to us: meaning doesn’t come for free, it’s not an automatic objective fact of the universe, but rather meaning is subjectively determined and we have Kierkegaard’s wonderful formula “subjectivity is truth.” You see the parallel between this and Tarski’s theorem where the meaning of mathematical truth doesn’t come for free with its theory, I’m sure. There remains a necessary interpretive step.
The Platonic view of mathematics is that “2” and “+” and “the Lebesgue integral” and such things are, in some spiritual sense, real. I argue that if this is true, the realm of pure forms must be a truly hideous place—too hideous to be real. Mathematics as foundational, self-evident Truth is a horrid and repellent notion to my mind for some of the reasons I’ve laid out above. It is far too limited to be “the Truth.”
I like the existential philosophy. The speculative, the idealist, the Platonic—these are all too much mumbo-jumbo for me to believe. It never ceases to amaze me that there are committed atheist Platonists, people who staunchly deny the existence of a god but sincerely believe in the actual existence of “two” and “multiplication.” What faith it requires to believe in numbers!2 All my faith is used up just believing in God; there’s none left over for abstractions.
But that’s beside my point in this essay. My point is that purely rigorous mathematics is not self-evident Truth. It requires unvalidated assumptions. I mentioned earlier the viewpoint that
We shouldn’t say that any given axioms are True. But we can still say that implications of the schema Axioms ⇒ Theorems hold regardless of whether we’re in a space where the axioms themselves do. These implications are the mathematical Truths.
However, by now you ought to realize that even these implications reside within a theory whose consistency (by Gödel’s second incompleteness theorem) must be taken basically on faith,3 and whose meaning (by Tarski’s undefinability theorem) is not inherent to itself but must be interpreted. Thus even in the coldest, purest mathematics we see the shadows of Kierkegaardian subjectivity.
This is, I think, a heartwarming conclusion. I am moved to repeat that wonderful line of Kant’s: “I must, therefore, abolish knowledge, to make room for belief.” Mathematics is wonderful, a great accomplishment of human reason; I love it even though I studied it no further than undergraduate complex analysis. But it’s not the ultimate secret of existence, accessible only to those endowed with intellectual ability and privileged with the chance to study it. It is the peak accomplishment of human reason. But reason is not the ultimate ground on which humans engage with existence; those grounds are belief, subjectivity, interpretation, and relation, and they are accessible to all.
1. A technical question I’m not sure about: at first glance you can continue to add +Con(Theory) axioms to a metatheory hierarchy indefinitely, but it also seems plausible to me that this process could yield an inconsistent theory after a finite number of iterations. Of course it depends on what theory you start with; I’m sure there are some theories that can be extended indefinitely in this way and some that cannot, but I’m not sure which bucket Peano arithmetic or ZFC falls into.
2. The “real numbers” in particular, once you learn anything about them, prove to be the most unbelievable and unreal things out there.
3. Or by proving that theory’s consistency in a stronger theory, which just kicks the can down the road and doesn’t fix the problem.
"Twice two makes four is a pert coxcomb that stands with arms akimbo barring your path and spitting" — Big D.