(updated at end)

I was reading about the halting problem recently, and in trying to understand it, I thought of a question that I can’t figure out the answer to (it is one of these cases where it feels like I am thinking myself into a knot). I am hoping one of my readers can answer the question for me. First, a little background on the halting problem:

The halting problem is the problem of determining whether, for a particular computer program (X) and a particular input to that program (Y), the program will ever “halt” (finish computing) or will instead run forever. (An example of a computer program that runs forever is: while(TRUE) print “still going”.)

(In all of the below, “halt” and “stop” mean the same thing: just that the computer program finishes computing in finite time.)

Alan Turing proved that the halting problem is not computable, i.e. that no computer program can take, as inputs, program X and input (to program X) Y, and determine whether X will halt when run on Y.

Wikipedia summarizes the proof here, and Craig Kaplan summarizes it here.

Here is my quick summary (I am mostly using Craig’s line of reasoning, although Wikipedia’s is equivalent):

*******

Suppose you make a program called A, which takes program X and input Y, and which solves the halting problem, i.e. which tells you whether X would stop with an input of Y (it outputs TRUE if yes, FALSE if no). Note that X and Y are input to A as strings, and that X can take any string as an input. Construct a new program B, which takes one input, X, and is defined as: B(X)=A(X,X). That means that B tells you whether the program X would terminate if the input to X were the text of the program X itself (since X can take any string as an input, it can take the string representing itself as an input).

Now make a program C which takes one input, X, and is designed as follows:

if B(X) = TRUE then C just loops forever; if B(X) = FALSE then C outputs TRUE and halts.

The contradiction is as follows: Suppose you run C(C). If C(C) runs forever, the interpretation is that B(C)=TRUE, which means that A(C,C) = TRUE, which means that C, if run on itself, would stop. That sentence can be shortened into “If C(C) runs forever, C(C) would stop.” Conversely, if C(C) gives TRUE and halts, the interpretation is that B(C)= FALSE, which means that A(C,C) = FALSE, which means that C, if run on itself, would not stop. That sentence can be shortened into “if C(C) halts, then C(C) would not halt.” Notice that C(C) can neither run forever, nor stop, without a contradiction. Thus there can be no program A that gives TRUE if X halts with input Y, and gives FALSE if it does not halt.
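To make the construction concrete, here is a sketch of A, B, and C in Python. The names (`would_halt` for A, the string arguments for program text) are mine, not part of the proof, and since Turing showed that A cannot exist, its body is only a stub:

```python
def would_halt(program_source: str, input_str: str) -> bool:
    """Program A: decides whether program_source halts on input_str.
    Turing's proof shows no such implementation can exist, so this
    is only a stub."""
    raise NotImplementedError("no such program can exist")

def B(program_source: str) -> bool:
    # Program B: ask A about the program run on its own source text.
    return would_halt(program_source, program_source)

def C(program_source: str) -> bool:
    # Program C: do the opposite of whatever A predicts about
    # the program run on itself.
    if B(program_source):
        while True:       # A says "halts" -> loop forever
            pass
    else:
        return True       # A says "runs forever" -> halt
```

The contradiction in the proof is exactly what you would hit if you could fill in `would_halt` and then call `C` on `C`'s own source text.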

*******

Suppose we tweak the programs A, B, and C above, and define slightly different programs A’, B’, and C’. Program A’ is defined **not** such that it gives TRUE if X halts on Y and FALSE otherwise, but such that it gives TRUE if X halts on Y **within 5 seconds** and gives FALSE if X either a) halts after more than 5 seconds or b) never halts.

Note that, unlike the original halting problem solver A, we actually can create this program A’ – it would simply be coded to run program X with Y as an input and wait 5 seconds. If X halted within that timeframe, A’ would output TRUE; if not, FALSE. (If you tried to create A by simply running X with input Y, you could potentially wait forever, but you’d never know that you were going to wait forever, so it doesn’t work.)
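A minimal sketch of A’ along these lines, in Python. The name `halts_within` and the choice of running the program as a subprocess are my assumptions; the idea is just “run it and give up after the deadline”:

```python
import subprocess
import sys

def halts_within(program_source: str, input_str: str,
                 timeout: float = 5.0) -> bool:
    """Program A': run program_source (Python source text) with
    input_str on stdin, and report whether it halts within
    `timeout` seconds."""
    try:
        subprocess.run([sys.executable, "-c", program_source],
                       input=input_str, capture_output=True,
                       text=True, timeout=timeout)
        return True      # finished (or crashed) in time -> it halted
    except subprocess.TimeoutExpired:
        return False     # still running when the deadline passed
```

For example, `halts_within("print('hello')", "")` gives TRUE, while `halts_within("while True: pass", "", timeout=1.0)` gives FALSE after waiting out the deadline.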

**Given this tweaked version of A, A’, what would be the interpretation of C’, which is defined analogously to C?**

In order to get B’ and C’, we just tweak the above discussion slightly so that B’ and C’ are defined in terms of A’, which tells you whether the program “halts within 5 seconds” rather than whether it “halts” (ever): B'(X) (= A'(X,X)) gives TRUE if X halts on itself within 5 seconds, and FALSE otherwise. C'(X) runs forever if B'(X) gives TRUE, and outputs TRUE and halts (though not necessarily within 5 seconds) if B'(X) gives FALSE. Then consider three possibilities:

1) If C'(C’) runs forever, the interpretation is that B'(C’) gives TRUE, which means that C’, if run on itself, would halt within 5 seconds (a contradiction).

2) If C'(C’) stops within 5 seconds, the interpretation is that B'(C’) gives FALSE, which means that C’, if run on itself, would not halt within 5 seconds (also a contradiction).

3) If C'(C’) stops, but after more than 5 seconds, the interpretation is also that B'(C’) gives FALSE (C’ is not guaranteed to stop within 5 seconds when B’ gives FALSE), and thus C’, if run on itself, would not stop within 5 seconds (not a contradiction!).
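The tweaked constructions B’ and C’ are also easy to sketch, and unlike B and C they are genuinely runnable. The names are mine; `halts_within` plays the role of A’, implemented (as described above) by actually running the program with a deadline, and I repeat it here so the sketch is self-contained:

```python
import subprocess
import sys

def halts_within(program_source: str, input_str: str,
                 timeout: float = 5.0) -> bool:
    # A': run the program; report whether it halts within `timeout` seconds.
    try:
        subprocess.run([sys.executable, "-c", program_source],
                       input=input_str, capture_output=True,
                       text=True, timeout=timeout)
        return True
    except subprocess.TimeoutExpired:
        return False

def B_prime(program_source: str, timeout: float = 5.0) -> bool:
    # B': does the program halt within the deadline on its own source?
    return halts_within(program_source, program_source, timeout)

def C_prime(program_source: str, timeout: float = 5.0) -> bool:
    # C': loop forever if B' says "halts in time", otherwise halt.
    if B_prime(program_source, timeout):
        while True:
            pass
    else:
        return True
```

Actually running C'(C’) would require passing C_prime its own source text (together with its helpers) as a string, which I leave aside here; the sketch is only meant to show the shape of the construction.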

**Note that the only non-contradiction is possibility 3. Does that mean that C'(C’) must halt, but only after 5 seconds have elapsed?**

That would seem like an odd constraint for a computer program – how could we know that a computer program with a given specification must definitely halt, but never in less than 5 seconds? As computers get faster and faster, 5 seconds buys you more and more computation, so it simply can’t be that we could say now that a given computer program, which halts, will never halt in under 5 seconds.

There seems to be a flaw in my reasoning. Where is it?

UPDATE (March 31): My friend Mark Mixer has pointed out my point of confusion. In order to allow my readers the pleasure of figuring out where I went wrong for themselves, and in order not to embarrass myself too much, I’ll leave it at that.