David Hilbert was a master of formal logic who wondered about many things.
He challenged the mathematicians of his day with a famous list of problems.
Thinking there might be a way to algorithmically determine the truth or falsity of any statement in an axiomatic system, he posed the Entscheidungsproblem.
Philip Wadler tells the story:
Computation has limits. Mathematicians and programmers will never go out of business. Computer Science will always be an interesting field. Let’s dig deeper into this field of computation, and in particular, study what it means to be computable.
What do we mean by “computation”? This definition will do: a formal, mechanical, step-by-step procedure for transforming input information into output information, such that the computation itself can be described finitely and that each step in the computation takes a finite amount of time.
There are many ways to capture this idea formally. We learned from Wadler that all three of these:
were equivalent in terms of what exactly they could compute. It turns out that a bunch of other efforts to formalize computation were also computationally equivalent:
Because all of these formalisms are equivalent, and no one has found anything better (other than magic oracles), most folks hypothesize that we really have formally captured our intuitive notion of computability with these things. The Church-Turing Thesis states that what we intuitively understand by “computation” is captured by the Turing Machine (and its brethren).
In retrospect, one wonders why Hilbert suspected that every mathematical statement (i.e., a statement of a “formal axiomatic system capable of representing arithmetic and beyond”) was decidable! That is, why should all true statements have proofs, and why should there exist a finite algorithm that could compute all functions in $\mathbb{N} \rightarrow \mathbb{N}$ (or even in $\mathbb{N} \rightarrow \mathbb{B}$)?
After all, a simple diagonalization proof tells you there are non-computable functions in $\mathbb{N} \rightarrow \mathbb{N}$:
Proof: Programs are finite strings of symbols, so there are only countably many of them. If every function were computable, there would be a program for each one, so we could enumerate every function from $\mathbb{N} \rightarrow \mathbb{N}$ as $f_0, f_1, f_2, \ldots$. But then the function $f$ defined by $f(n) = f_n(n)+1$ would not be on the list. So this list, and in fact any such list, is incomplete.
We’re going to write out this diagonalization on the board.
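As a finite warm-up (a toy illustration only, not the full proof), here is the diagonal trick applied to a short list of functions; the three sample functions are arbitrary choices:

```python
# A toy version of the diagonal argument: given any (finite) list of
# functions on the naturals, build one that differs from each function
# at its own index.
fs = [lambda n: n, lambda n: n * n, lambda n: 0]

def diag(n):
    return fs[n](n) + 1   # differ from f_n at input n

# diag disagrees with every fs[i] at input i, so it is not on the list.
print([(diag(i), fs[i](i)) for i in range(len(fs))])  # → [(1, 0), (2, 1), (1, 0)]
```

The same construction works against any proposed infinite enumeration, which is exactly the point of the proof.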
Okay nice, so there are more functions than programs. Diagonalization only tells us uncomputable functions exist, but it doesn’t tell us which ones.
Are there any realistic functions we can specify that we cannot compute ($\mathbb{N} \rightarrow \mathbb{N}$) or cannot decide ($\mathbb{N} \rightarrow \mathbb{B}$)?
Do there exist problems for which all instances have a YES-NO answer but no computer, no matter how powerful, nor any algorithm, in any formalism whatsoever, can compute the answer?
Well yes, by the proof we just saw. But are there any that we can describe and actually understand? Yes, and here’s the classic first example. It is known as The Halting Problem:
Write a function called halts, which accepts a function f and an input x, and returns True if f(x) halts, and False if f(x) enters an infinite loop when run.
Sounds simple enough. Let’s assume we can write it:
```python
def halts(f, x):
    # TODO: code to decide the issue and return True or False
    ...
```
But here’s the problem. Let’s consider this, perfectly normal function:
```python
def fox(f):
    if halts(f, f):
        while True:
            pass
```
Now consider the call fox(fox).
The call fox(fox) expands to: “if halts(fox, fox) then go into an infinite loop, otherwise halt.”
Wait. Ugh. Do you see.... OMG. This cannot be!
The call runs forever exactly when the halts function says it should stop, and stops exactly when the halts function says it should run forever.
We are forced to conclude that our assumption that halts could be written must be incorrect. We cannot write halts in any programming language.
The halting problem is undecidable.
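To make the contradiction concrete, here is a runnable sketch (the `refute` helper and the sample candidate below are illustrative, not from the notes): whatever candidate implementation of halts you propose, its own fox witnesses an error.

```python
# Any candidate implementation of halts is refuted by its own fox.
def refute(candidate_halts):
    def fox(f):
        if candidate_halts(f, f):
            while True:
                pass
    if candidate_halts(fox, fox):
        # The candidate claims fox(fox) halts, but then fox(fox)
        # would enter the infinite loop above.
        return "candidate wrongly says fox(fox) halts"
    else:
        # The candidate claims fox(fox) loops, but then fox(fox)
        # skips the loop and returns immediately -- watch:
        fox(fox)
        return "candidate wrongly says fox(fox) loops"

print(refute(lambda f, x: False))  # → candidate wrongly says fox(fox) loops
```

Try it with `lambda f, x: True` as well: the candidate is wrong either way.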
Let’s make a formal definition of decidable and undecidable. We are working in theory-land, so we want to be simple and precise. Here’s a definition: A language is decidable if there is a Turing Machine that accepts (answers YES) for all strings in the language and rejects (answers NO) for all strings not in the language. A language is undecidable if no such Turing Machine exists.
So decidability is about languages. But you knew that because we defined what a decider was in the notes on Automata Theory earlier. So when we say a “computational problem is undecidable,” we mean the language of all machines that compute that problem is undecidable. For the halting problem, what we are really saying is the language: $$H = \{ (\langle M \rangle , w) \mid M \textrm{ halts on } w \}$$ (where, as you may recall $\langle M \rangle$ is the string encoding of machine $M$) is undecidable.
Recall that a language is recognizable if there is a Turing Machine that accepts (answers YES) for all and only those strings in the language. A language is decidable if there is a Turing Machine that accepts (answers YES) for all strings in the language and rejects (answers NO) for all strings not in the language. A recognizer may either, for strings not in the language, loop or answer NO. A decider must always halt with a YES or NO answer. All deciders are recognizers but not vice-versa.
One of the more interesting things to come out of computability theory is that there are some languages that Turing Machines can recognize, but not decide. The language of all machine+input pairs that halt is one such language. That is, even though you can’t decide whether a program will halt, you can recognize whether a program will halt:
```python
def halts(f, x):
    f(x)            # if f(x) runs forever, so do we
    return True
```
So the language of halting programs is recognizable but not decidable.
The formal (mathematical) argument goes like this: If you can recognize a language, you can enumerate all its utterances. For concreteness, let’s see how this is done with the Halting Problem. Since all programs are strings we can enumerate a list of all programs $p_0, p_1, p_2, p_3, \ldots$. We can do this:
...and so on. Whenever a $p_i$ halts, add it to your “output list” of halting programs.
This process enumerates all and only the halting programs.
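Here is a toy sketch of this dovetailing schedule, with real programs replaced by numbers (the `halt_at` list, which says when each “program” halts, is an assumption for illustration only, since no real oracle for this exists):

```python
# Dovetailing, in miniature: in round r, run programs 0..r-1 for r steps
# each. Program i is modeled as halting after halt_at[i] steps, or
# running forever if halt_at[i] is None.
def enumerate_halting(halt_at, rounds):
    found = []
    for r in range(1, rounds + 1):
        for i in range(min(r, len(halt_at))):
            if halt_at[i] is not None and halt_at[i] <= r and i not in found:
                found.append(i)   # program i halted within r steps
    return found

# Programs 0 and 2 halt (after 3 and 1 steps); program 1 runs forever.
print(enumerate_halting([3, None, 1], rounds=5))  # → [0, 2]
```

Notice that program 1 never blocks the schedule: every halting program eventually lands on the output list, which is exactly what a recognizer-based enumeration guarantees.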
We have the following fact:
The set of all languages that are Turing-Recognizable is known as RE, namely, the set of Recursively Enumerable Languages.
and also:
The set of all languages that are Turing-Decidable is known as R, namely, the set of Recursive Languages.
The fact that $R$ is a proper subset of $RE$ surprised some people. It says there are languages, and therefore decision problems, for which we can definitely compute a YES answer, but for which, when the answer is NO, we might never know.
🤯
Now what about the language of all machine+input pairs where the machine never halts on the input (sometimes called $\bar{H}$)? That language isn’t even recognizable. This will need a proof, but we’ll get there soon.
So, how do we know a language isn’t recognizable?
We can find a first non-recognizable language using diagonalization. The language is called $\mathcal{X}$ and is defined by:
$$ \mathcal{X} = \{ \langle M \rangle\mid \langle M\rangle \not\in L(M) \} $$

This language is the set of all machine descriptions for which the machine description is not in the language recognized by the machine. Informally, it is the set of all machines that don’t recognize themselves. To show this language is not recognizable by any Turing Machine let’s make a list of all the Turing Machines there can possibly exist. There are a countably infinite number of these. We will call them $M_0, M_1, M_2, \ldots$. For each machine we will mark exactly which machine encodings are in the language or not:
| $\langle M_0 \rangle$ | $\langle M_1 \rangle$ | $\langle M_2 \rangle$ | $\langle M_3 \rangle$ | ... | |
|---|---|---|---|---|---|
| $M_0$ | T | F | F | T | ... |
| $M_1$ | T | T | T | T | ... |
| $M_2$ | F | F | F | T | ... |
| $M_3$ | T | T | F | T | ... |
| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋱ |
The T’s in each row comprise the language recognized by $M_i$. The language $\mathcal{X}$ is the flipped diagonal of this table, which cannot appear in this table. No matter how we try to enumerate all the Turing Machines, there can never be a row which makes $\mathcal{X}$. So $\mathcal{X}$ cannot be recognized by a Turing Machine.
So the Halting Problem is undecidable. But how do we prove other problems undecidable? One way is to prove: “If problem $P$ were decidable, then the Halting Problem would also be decidable.”
This is done by reducing the Halting Problem to problem $P$.
In general, a reduction of problem $A$ to problem $B$ means we use an algorithm for $B$ to solve $A$. You’ve probably done that a lot in the past, right? You've written “utility functions” and then called them to do a larger task. In the Wikipedia article on reduction, they show how to reduce multiplication to squaring—meaning we can use squaring (and addition) to solve any multiplication problem. Or “in order to multiply, I just need to know how to add and square.”
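That squaring example can be written out directly (a minimal sketch; the identity $xy = ((x+y)^2 - x^2 - y^2)/2$ is standard algebra):

```python
# Reducing multiplication to squaring: if we can square and add
# (and halve), we can multiply, since x*y = ((x+y)^2 - x^2 - y^2) / 2.
def square(n):
    return n * n

def multiply(x, y):
    return (square(x + y) - square(x) - square(y)) // 2

print(multiply(6, 7))  # → 42
```

The shape is the same as our undecidability reductions: a solver for problem $B$ (squaring) is used as a subroutine to solve problem $A$ (multiplication).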
Let’s do this for the problem of determining whether $L(M) = \varnothing$. In formal notation, we want to know if we can decide:
$$\mathcal{E} = \{ \langle M \rangle \mid L(M) = \varnothing \}$$This is the empty language problem. We can reduce the halting problem to it. Let’s do it. Suppose $\mathcal{E}$ were decidable. This would mean we would have a machine $M_{\mathcal{E}}$ that when given the description of a machine $M$ as input, would always halt with YES when $L(M) = \varnothing$ and halt with NO otherwise.
Now, let’s make a machine called $H$ that (1) takes two inputs, $\langle M\rangle$, the description of a Turing Machine, and $w$ a string, and produces the description of a machine that erases its input, writes $w$ on it, and simulates $M$, accepting if the simulation halts, then (2) passes that new machine to $M_{\mathcal{E}}$, then (3) swaps the outputs of $M_{\mathcal{E}}$. Machine $H$ decides the halting problem. But this cannot be. The only place we could have gone wrong was our assumption that a decider for $\mathcal{E}$ exists. So $\mathcal{E}$ is undecidable.
A diagram might help:

Here $L(M') = \varnothing$ iff $M$ does not halt on $w$, so the dashed box is a halting problem decider, which should not exist. So the Empty? machine cannot be a decider.
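The construction at the heart of the reduction can be sketched in Python (the names `build_M_prime` and `is_empty` are illustrative; the hypothetical `is_empty` decider is exactly what we proved cannot exist):

```python
# Given M and input w, build M' that ignores its own input and
# simulates M on w, accepting iff that simulation halts.
def build_M_prime(M, w):
    def M_prime(x):
        M(w)            # may loop forever
        return True     # accept iff M halted on w
    return M_prime

# If a decider is_empty existed, we could decide halting:
#   halts(M, w) == not is_empty(build_M_prime(M, w))
# since L(M') is empty exactly when M never halts on w.

always_halts = lambda w: None          # a machine that halts on every input
Mp = build_M_prime(always_halts, "w")
print(Mp("anything else"))  # → True: M' accepts everything
```

The construction itself is perfectly computable; only the assumed `is_empty` subroutine is impossible, which is the whole point.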
See Wikipedia’s article on reducing problems and Wikipedia’s list of undecidable problems.
Make sure you understand the approach: To show decision problem $P$ is undecidable, reduce the halting problem (or any other known undecidable problem) to $P$.
If you can do that, this means that if $P$ were decidable, the halting problem would be, too, but we know it can’t, so our assumption was wrong and $P$ must be undecidable.
Here is something amazing.
Let’s say you were given a general YES-NO problem about the run-time behavior of programs (e.g. does the program halt? or does this program return true for any input?). Rice’s Theorem says that the problem is undecidable unless the property holds for all programs or holds for none.
More technically:
All nontrivial semantic properties of programs are undecidable.
Here semantic means run-time and trivial means holds for all or holds for none.
We can use Rice’s Theorem to show, pretty much immediately, that the following languages are undecidable:
Remember you can read many of the above languages as properties of programs. You can also apply the theorem to interesting properties and prove that there is no algorithm to definitively answer any of the following questions:
In real life, we have tools, called static analyzers, that examine program source code and try to predict errors and security vulnerabilities. By Rice’s theorem, we know that these tools can only approximate answers to most questions, either over-approximating or under-approximating, giving some false positives and false negatives, or answering “it might” or “I don’t know”.
It is interesting how many reasonable questions about programs are undecidable. Static analysis is an art form.
What Rice’s Theorem Does Not Apply To: Remember that Rice’s Theorem applies only to nontrivial semantic properties of programs. It does NOT apply to trivial properties such as “is $L(M)$ recursively enumerable?”, nor to syntactic properties such as “Does this program have more than 10 lines of code?” or “Does this program contain the keyword for?” Ditto for Turing Machines: it does not apply to questions such as “Does this TM have more than 10 states?”
Read more about Rice’s Theorem at Wikipedia.
Time to talk about beavers.
Real beavers?
AI-generated beavers?

An $n$-state busy beaver is a deterministic $n$-state halting Turing Machine with $\Sigma = \{ 1 \}$ and $\Gamma = \{0, 1\}$ (where 0 is the blank) that ... okay, well there are at least three different kinds of busy beavers ... that, either:
on an initially blank tape over the set of all such deterministic $n$-state halting Turing Machines.
Why on Earth would we care about such things?
They tell us something about computability and complexity. It turns out that the busy beaver function itself is uncomputable, because it grows faster than any computable function. 🤯🤯🤯
Let’s define a Busy Beaver function. Previously, we saw that the number of deterministic $n$-state Turing Machines with a 2-symbol alphabet is either $(4n+1)^{2n}$ (for the halt-as-action variety) or $(4(n+1))^{2n}$ (for the halt-state variety). Hey, either way, that’s a lot of Turing Machines! For fun, here are the first few values:
| $n$ | $(4n+1)^{2n}$ | $(4(n+1))^{2n}$ |
|---|---|---|
| 1 | 25 | 64 |
| 2 | 6561 | 20736 |
| 3 | 4826809 | 16777216 |
| 4 | 6975757441 | 25600000000 |
So to find $BB(n)$ for a given $n$, we first note that some of these machines will halt, and some will not. Now we can go through all the halting ones (hahahahaha, oh my) and count the number of 1s they print, or the number of steps they take. The maximum number of printed 1s we find is $\Sigma(n)$, and the maximum number of steps taken is $S(n)$. These days, people care mostly about $S$, which they now call $BB$, so $S(n) = BB(n)$. Let’s see what these values are:
| $n$ | $\Sigma(n)$ | $BB(n)$ |
|---|---|---|
| 0 | 0 | 0 |
| 1 | 1 | 1 |
| 2 | 4 | 6 |
| 3 | 6 | 21 |
| 4 | 13 | 107 |
| 5 | 4098 | 47176870 |
| 6 | $> 2 \uparrow\uparrow (2 \uparrow\uparrow (2 \uparrow\uparrow 10))$ | $> 2 \uparrow\uparrow (2 \uparrow\uparrow (2 \uparrow\uparrow 10))$ |
Woah.
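For tiny $n$ we can actually watch a champion run. Here is a minimal simulator applied to the standard 2-state, 2-symbol champion (the transition table below is the well-known winner; the halt state is modeled as `'H'`):

```python
# Simulate a Turing Machine given as {(state, symbol): (write, move, next)}.
def run_tm(rules, max_steps=10_000):
    tape, head, state, steps = {}, 0, 'A', 0
    while state != 'H' and steps < max_steps:
        symbol = tape.get(head, 0)                  # blank cells read as 0
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
        steps += 1
    return steps, sum(tape.values())                # (steps taken, 1s printed)

# The 2-state busy beaver champion.
bb2 = {('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
       ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H')}
print(run_tm(bb2))  # → (6, 4): S(2) = BB(2) = 6 steps, Sigma(2) = 4 ones
```

This matches the $n=2$ row of the table above. Of course, this brute-force approach collapses immediately for larger $n$, where deciding which machines halt is the whole battle.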
It seems amazing that we actually know what $BB(5)$ even is! Knowing this means that for each of the tens of trillions of possible 5-state, 2-symbol Turing Machines, someone figured out whether it halts or not, and if it halts, how many steps it took, and what it outputs. Many thousands of people contributed to the decades-long effort to find it.
That’s a lot of machines to go through. And it’s not easy to figure out whether a long-running machine will eventually halt or not. Such a machine might take years or centuries to finish, so we can’t just run it and see. In fact, there is no way to compute, in general, whether an arbitrary machine will even halt. We have to painstakingly analyze each machine, and we have to sometimes be really clever. And even if we know a long running machine will halt, we need a clever way to figure out its step count (or printed-1s count), because this number might be too big to actually store. At some point the numbers far exceed our human ability to currently even describe. A lot of ingenuity is required. People that try to find these values are called Busy Beaver Hunters.
Here are some busy beavers (from Pascal Michel). The notation below comes from bbchallenge, the Busy Beaver Challenge site. Here 0=blank.
There are thousands of great resources to study. Here is a small sampling. Note that some of these exclusively use the halt-state variety and some exclusively use the halt-action variety and some mix them all up. Try not to get too lost!
Busy Beaver Functions are not Computable: It turns out that $\Sigma$ and $BB$ are not computable functions. Although they each have a well-defined finite description, no computing device can compute them. Here's something really cool: They grow faster than any computable function.
Imagine that.
So far, we have been concerned with computable functions and recognizable languages (which are in some sense the same thing). We saw some noncomputable functions and some nonrecognizable languages. But interestingly, did you catch what Turing’s amazing paper was called?
On Computable Numbers, with an Application to the Entscheidungsproblem
Computable numbers. That’s where Turing started. (You really should read his paper, or better, read Petzold’s book The Annotated Turing which includes the whole paper plus copious annotations.)
A number is computable if there is a mechanical procedure (i.e., an algorithm) that can produce its digits to any desired level of accuracy. Every rational number is computable. Even some irrational numbers are computable, like $\pi$ and $e$, because we can algorithmically generate their digits.
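For instance, here is a sketch of such a mechanical procedure for $e$, using the series $e = \sum_k 1/k!$ with exact rational arithmetic (the approach and the helper name `e_digits` are illustrative, not from the notes):

```python
# Compute e truncated to n decimal digits via its series, using exact
# fractions so rounding error never creeps in.
from fractions import Fraction

def e_digits(n):
    # Sum terms 1/k! until the next term is below 10**-(n+2); the
    # remaining tail is then far too small to affect digit n.
    total, term, k = Fraction(0), Fraction(1), 0
    while term > Fraction(1, 10 ** (n + 2)):
        total += term
        k += 1
        term /= k
    digits = (total * 10 ** n).numerator // (total * 10 ** n).denominator
    return f"{digits // 10**n}.{digits % 10**n:0{n}d}"

print(e_digits(10))  # → 2.7182818284
```

The key property is the "any desired level of accuracy" part: the same procedure, given a bigger `n`, keeps producing correct digits, which is exactly what makes $e$ computable.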
So where do computable numbers fit in? For these notes, we’ll look only at one-dimensional numbers. You can generalize the discussion if needed. We have another one of those proper containment hierarchies:
Now let’s enter the realm of magic. Not illusion. Not sleight of hand. Magic.
Loosely speaking, an oracle (a.k.a. a magic oracle) is something you can add to a machine, that, at any time during a computation, you can simply “ask the oracle” a question whose answer might not even be computable. It just magically answers the question and you go about your business.
For example, you might have an oracle that can answer the Halting Problem. You could build a Turing Machine with such an oracle, and whenever you need to know whether some machine halts on some input, you just ask the oracle. The oracle tells you YES or NO, and you continue your computation. Such an Oracle Machine can decide the halting problem and other classically undecidable problems. But you know what? An Oracle Machine like that cannot decide whether an Oracle Machine like itself will halt.
Computability is fun. Something here transcends even magic.
Read about oracles at Wikipedia.
Computability Theory deals with what can and cannot be computed on a particular computing model. It does not make any claims on the number of steps required, or the amount of space required, to do the computation. That is what complexity theory is for.
But maybe we shouldn’t try to separate computability from complexity. These are convenient labels, but they are very interrelated. Gabe Robins’s diagram that we saw when discussing automata theory intentionally mixes complexity classes in with the standard computability classes. Many schools offer a course called Computability and Complexity. Erik Demaine at MIT gives a brief overview of the limits of computation within his complexity theory lecture in his 6.006 class:
Computability Theory is a fascinating subject.
You might also like:
When studying Robins’s notes, focus on the set entitled “Formal languages, automata models, grammars, decidability, and Rice's theorem” (PDF here). There are several examples of problem reductions and of applications of Rice’s Theorem.
Here are some questions useful for your spaced repetition learning. Many of the answers are not found on this page. Some will have popped up in lecture. Others will require you to do your own research.
We’ve covered: