Computer Science

Halting Problem

The Halting Problem is a fundamental result in computer science: it is impossible to create a program that can determine, for every possible program and input, whether that program will eventually halt or run forever. This was proven by Alan Turing in 1936, and it has significant implications for the limits of computation and the design of algorithms.

Written by Perlego with AI-assistance

4 Key excerpts on "Halting Problem"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.
  • Simply Turing
    eBook - ePub
    • Michael Olinick (Author)
    • 2021 (Publication Date)
    • Simply Charly (Publisher)

    ...Each is mathematically equivalent to a Turing Machine. Alan devised a rigorous notion of effective computability based on the Turing Machine. Not only did his paper create computer science, but it also solved Hilbert’s Decision Problem. Turing showed that there are problems—notably the famous “Halting Problem”—that cannot be effectively computed. He proved the impossibility of devising a Turing Machine program that can determine infallibly (and within a finite time) whether a given Turing Machine T will eventually halt, given some arbitrary input I.

    There are several different proofs of the nonexistence of a Turing Machine that solves the Halting Problem. We give one here that is a proof by contradiction. We begin by assuming that such a Turing Machine, call it MH, does exist. Then we show that this assumption leads to a logical contradiction. Suppose that we have this machine MH. If <M, I> is any pair where M is a Turing Machine and I is a possible input tape, then MH will print “Yes” if M halts on input I and will print “No” if M does not halt on I. Figure 2 provides a schematic picture of how MH works.

    The input I might itself be a description of a Turing Machine, so our pair might well be <M, M>. Given the existence of MH, we can create a new machine MH*, which has the following program:

    1) Given an input M, run MH on the pair <M, M>.
    2) If MH prints Yes, then loop.
    3) If MH prints No, then halt.

    Figure 3 is a schematic picture of MH*. Observe that MH* on M halts if and only if M on M does not halt. Finally, consider what MH* does if it is fed itself, MH*. By our program’s rules, MH* on MH* halts if and only if MH* on MH* does not halt. We have arrived at a logical contradiction, showing that the Halting Problem is undecidable.

    In April 1936, Turing gave Newman a draft of his paper, the first inkling Newman had that Alan had been working on this topic...
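The diagonal argument in this excerpt can be sketched in executable form. The sketch below is illustrative, not from the source: `naive_halts` stands in for MH and `adversary` for MH*. Since no correct decider can exist, we give the decider a trivially wrong body and show that the adversary built from it defeats it.

```python
def naive_halts(func, arg):
    # Stand-in for the impossible machine MH: a hypothetical decider
    # that simply guesses "yes, it halts" for every (program, input) pair.
    return True

def adversary(func):
    # MH* from the proof: ask the decider about <M, M>, then do the opposite.
    if naive_halts(func, func):
        while True:      # the decider said "halts", so loop forever
            pass
    return "halted"      # the decider said "loops", so halt

# naive_halts claims that adversary halts on itself, yet by construction
# adversary(adversary) would loop forever, so the decider is wrong.
print(naive_halts(adversary, adversary))  # True
```

Whatever body `naive_halts` is given, `adversary` does the opposite of the decider's prediction about itself, so no implementation can ever be correct; that is exactly the contradiction the excerpt derives for MH*.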

  • Logic from A to Z
    eBook - ePub

    The Routledge Encyclopedia of Philosophy Glossary of Logical and Mathematical Terms

    • John B. Bacon (Author)
    • 2013 (Publication Date)
    • Routledge (Publisher)

    ...H

    Halting Problem. Basic undecidability result in computability theory. It is provable that the Halting Problem does not admit of general solution. To solve the Halting Problem would be to construct a general computer program (more precisely, a Turing machine or register machine) which will correctly determine, of an arbitrary program or machine P in the same language and an arbitrary potential input n, whether P’s computation will ever halt once n is actually input to P. Part of the import of the Halting Problem lies in the fact that there are many natural, mathematical problems which can be shown unsolvable by comparison with it. See Solvable problem.

    Hauptsatz. See Cut-elimination theorems.

    Heap, paradox of the. See Paradox, sorites.

    Henkin sentence. A sentence H of the language of a formal theory T is said to be a Henkin sentence for T just in case the following is provable in T (where Prov_T(x) satisfies the derivability conditions): H ↔ Prov_T(⌈H⌉). See Derivability conditions.

    Henkin’s problem. A problem posed in 1952 by Leon Henkin. It can be stated as follows. Let T be a standard first-order formal system of arithmetic adequate for the representation of all recursive relations of natural numbers, and let H be a formula of the language of T which expresses in T the idea that H itself is provable in T. Is H provable in T or independent of T? Henkin’s problem was solved by Löb in 1955. He showed that any such H is provable in T. See Henkin sentence; Löb’s theorem.

    Herbrand’s theorem. Fundamental normal form result of proof theory, first stated by Herbrand in his 1929 thesis. One may conceive of the theorem as a constructive version of the Löwenheim–Skolem theorem. In a special case, Herbrand’s theorem asserts that an existential sentence is a theorem of classical predicate logic if and only if there is a quasitautology composed solely of instances of the quantifier-free matrix of the sentence...
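The entry's remark that many problems "can be shown unsolvable by comparison with" the Halting Problem refers to reduction. A minimal sketch, assuming a hypothetical `hello_decider` that could tell whether a piece of Python source ever prints "hello" (the names and the source-appending trick are illustrative assumptions, not from the text):

```python
def halts_via(hello_decider):
    """Given a hypothetical decider for "does running src ever print 'hello'?",
    build a decider for "does this program halt (on no input)?".
    Since the latter is undecidable, hello_decider cannot exist either."""
    def halts(program_src):
        # The wrapper runs the program first, so 'hello' is printed
        # if and only if the program halts.
        wrapper = program_src + "\nprint('hello')\n"
        return hello_decider(wrapper)
    return halts

# A stub decider lets us at least exercise the construction:
def stub(src):
    return src.endswith("print('hello')\n")

halts = halts_via(stub)
print(halts("x = 1 + 1"))  # True
```

If `hello_decider` were correct, `halts` would correctly decide halting on no input, which is impossible; therefore the "prints hello" property is undecidable too. The stub here only exercises the wiring, not a real decision procedure.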

  • Complexity
    eBook - ePub

    A Philosophical Overview

    • Nicholas Rescher (Author)
    • 2020 (Publication Date)
    • Routledge (Publisher)

    ...For with respect to purely theoretical problems it is clear from Turingesque results in algorithmic decision theory (ADT) that there will indeed be computer insolubilia—mathematical questions to which an algorithmic respondent will give the wrong answer or be unable to give any answer at all, no matter how much time is allowed. 4 But this is a mathematical fact which obtains of necessity, so that this whole issue can also be set aside for present purposes. For in our present context of generalized problem solving (GPS) the necessitarian facts of Gödel-Church-Turing incompleteness become irrelevant. Here any search for meaningful problem-solving limitations will have to confine its attention to problems that are in principle solvable: demonstrably unsolvable problems are beside the point of present concern because an inability to do what is in principle impossible hardly qualifies as a limitation. The sort of limit that a rigid demonstration of impossibility establishes in our present context is one that bears not so much on question answering as on question askability, seeing that it makes no sense to ask for the demonstrably impossible.

    For present purposes, then, it is limits of capability not limits of feasibility that matter. In asking about the problem-solving limits of computers we are looking to problems that computers cannot resolve but that other problem solvers conceivably can. The limits that will concern us here are accordingly not rooted in conceptual or logico-mathematical infeasibilities of general principle but rather in performatory limitations imposed upon computers by the world’s modus operandi...

  • Behavior and Culture in One Dimension
    eBook - ePub

    Sequences, Affordances, and the Evolution of Complexity

    • Dennis Waters (Author)
    • 2021 (Publication Date)
    • Routledge (Publisher)

    ...The calculation would proceed and, once completed, the answer would appear on the tape. The procedure repeats until either it completes its calculation by reaching some final state (halting) or, for some computations like calculating the value of π, it continues indefinitely. As tedious and repetitive as it sounds, the power of the concept is its universality. With a suitably defined table of states, a Turing Machine can compute any mathematical function that can be computed, period. “Turing’s machines may be clumsy and slow,” writes computer scientist John Kemeny, “but they present the clearest picture of what a machine can do.” 15 If you can express a problem in mathematical symbols, then a Turing Machine can be designed to compute it. This is called the Church-Turing Thesis; for computer scientists it is the definition of computability. 16

    As conceived earlier, Turing Machines do not scale well. Each kind of problem needs its own machine designed for that problem, just like the cell with its army of special-purpose enzymes. To solve all mathematical problems would demand an infinitely large fleet of Turing Machines. The cell partly solves this challenge with allosterism; enzymes can be configured to do more than one thing. 17

    Turing solves his scalability problem in much the same way, through what he calls the Universal Turing Machine (“UTM”). It eliminates the need for a fleet of unique machines; one machine can do it all. The UTM can be configured to behave like any special-purpose Turing Machine. Universality is possible because of a clever trick of sequential self-reference: the internal table of state transitions for a Turing Machine can be written in linear arrangements of the same zeros and ones used for its inputs and outputs. In other words, any Turing Machine can be completely described by a sequence on the tape of another Turing Machine, a UTM. 18

    Take a simple computation like addition on a desktop calculator...
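The idea that a machine's transition table can itself be written down as data and run by one general-purpose interpreter can be sketched in a few lines. This is an illustrative sketch, not Turing's own encoding: the table is an ordinary dictionary rather than a string of zeros and ones, but the principle is the same. One fixed `run_tm` routine plays the role of the UTM, and each special-purpose machine is just input to it.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    """Simulate a Turing machine whose transition table is plain data.

    rules maps (state, symbol) -> (new_state, new_symbol, move), where
    move is -1 (left), +1 (right), or 0 (stay). Returns the final tape
    contents as a string, or None if the machine hasn't halted in time.
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            cells = [tape[i] for i in sorted(tape)]
            return "".join(cells).strip(blank)
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:
            return None  # no applicable rule: treat as stuck
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return None  # step budget exhausted without halting

# A toy special-purpose machine, described entirely as data:
# flip every bit, then halt when the blank is reached.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "0110"))  # 1001
```

Note the `max_steps` cap: when a machine has not halted, the interpreter can only report that it has not halted yet, never that it never will; that is the Halting Problem again.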