Computable Structures and the Hyperarithmetical Hierarchy
Ebook, 656 pages

About this ebook

This book describes a program of research in computable structure theory. The goal is to find definability conditions corresponding to bounds on complexity which persist under isomorphism. The results apply to familiar kinds of structures (groups, fields, vector spaces, linear orderings, Boolean algebras, Abelian p-groups, models of arithmetic). There are many interesting results already, but there are also many natural questions still to be answered. The book is self-contained in that it includes necessary background material from recursion theory (ordinal notations, the hyperarithmetical hierarchy) and model theory (infinitary formulas, consistency properties).
Language: English
Release date: Jun 16, 2000
ISBN: 9780080529523

    Computable Structures and the Hyperarithmetical Hierarchy - C.J. Ash

    Preface

    Julia F. Knight

    0.1 Introduction

    This book explores the relation between definability and computability in mathematical structures. Much of the material that we wish to present grew out of work on the three problems below.

    Problem 1

    For a computable structure A, give syntactical conditions guaranteeing that for all computable copies B, there is a computable isomorphism from A onto B.

    Problem 2

    For a computable structure A, give syntactical conditions guaranteeing that for all computable copies B, all isomorphisms from A onto B are computable.

    Problem 3

    For a relation R on a computable structure A, give syntactical conditions guaranteeing that for all computable copies B and all isomorphisms F from A onto B, F(R) is computably enumerable.

    We may vary the problems by passing to higher levels in the hyperarithmetical hierarchy. Varying Problem 3 in this way, we ask for conditions guaranteeing that in computable copies of A, the image of R is Σ⁰α. We may also vary the problems by considering arbitrary (instead of computable) copies of A, and relativizing. Varying Problem 3 in this way, we ask for conditions guaranteeing that in an arbitrary copy B of A, the image of R is Σ⁰α relative to B. We may ask when the image of R in computable copies is d-c.e. (a difference of computably enumerable sets), or, relativizing, we may ask when the image of R in an arbitrary copy B is d-c.e. relative to B (a difference of relations that are c.e. relative to D(B)). We shall mention some further variants later.

    The reader may or may not find such problems immediately appealing. The results gain interest when applied to various familiar kinds of mathematical structures. In addition, the problems form nicely graded families, some immediately approachable, some not; and they have proved to be valuable in suggesting technical advances. We believe that the ideas and methods developed for these problems should be useful in a wider context.

    The background material needed for a thorough understanding of the results includes certain topics from computability (ordinal notation, the hyperarithmetical hierarchy) and model theory (infinitary formulas, consistency criteria) that are often omitted in basic texts and have not previously been gathered together. We devote several chapters to this background material.

    We give a detailed account of the results on the three basic problems and the specific variants described above. We include other results that can be obtained using the same techniques, and we mention some further variants of the basic problems that have required significant modifications of the techniques. We do not attempt to give an exhaustive list of results. Our aim is to present the main ideas.

    In all of the results on the basic problems and their variants, the syntactical conditions involve computable infinitary formulas. We shall discuss infinitary formulas in general, proving some basic results, such as the Scott Isomorphism Theorem. Then we describe the computable infinitary formulas. Taken all together, the computable infinitary formulas have the same expressive power as the hyperarithmetical infinitary formulas; i.e., those in the least admissible fragment of Lω1ω. (We mention this for those who happen to be familiar with admissible sets.)

    In arbitrary structures B, a relation defined by a computable Σα formula is Σ⁰α relative to B. Moreover, any formula with this feature is logically equivalent to a computable Σα formula. This fact is one of several expressive completeness results, indicating that for the kinds of problems we are interested in, the computable infinitary formulas say everything that we need to say.

    We regard this as a theorem of computability theory, and we give a proof using only basic facts taken from the old standard reference on computability by Rogers [134]. Our treatment makes no mention of admissible sets or Kripke-Platek set theory.


    We prove a metatheorem, due to Ash, giving abstract conditions (on a tree and some related objects) guaranteeing the success of a nested priority construction. The proof of the metatheorem involves a rather complicated construction. A proof using the metatheorem has the advantage of being modular. It calls for defining the tree and other objects, and then verifying the list of conditions. Thus, the metatheorem certainly simplifies the process of checking a proof. The process of discovering what might be true—before trying to write out a proof—involves informal, often non-verbal, thinking, together with testing of low-level cases. We do not claim that the metatheorem makes proofs simple. However, it has brought within reach several results that seemed impossible without it, and it has been used successfully by students with little prior experience in writing up priority constructions.

    The general results on the three basic problems, and related results on computable structures, have hypotheses that may at first appear impossibly strong. However, the results do apply to various familiar kinds of structures. The hypotheses involve a single nice copy of the structure. What is required is a thorough understanding of certain back-and-forth relations. We give results calculating these relations for certain classes of structures—in particular, for well orderings and superatomic Boolean algebras. We can show that the computable structures in these classes have nice copies.

    The thesis of this book is that syntactical conditions account for bounds on complexity that are intrinsic (bounds that persist under isomorphism). We emphasize positive results—results supporting this thesis. We illustrate the results with examples involving linear orderings, Boolean algebras, groups, vector spaces, models of arithmetic, etc. We mention only a few negative results. There are by now quite a number of these, involving structures invented to show that certain hypotheses cannot be dropped, certain syntactical conditions are not sufficient, etc. The negative results are certainly interesting, and they involve a great deal of ingenuity in the constructions.

    This book is not a survey of computable structure theory. We focus mainly on results obtained using a particular collection of ideas and techniques. Our hope is that the reader who has mastered the background material and understood the results that we have chosen to present will be in a position to apply these techniques, and to vary them as needed, to solve quite different-looking problems. For a broader view of computable structure theory, we recommend the books by Ershov and Goncharov [43] and Harizanov [64].

    0.2 Constructivizations

    Structures here are countable unless we specifically say otherwise. Moreover, it should be assumed that the language of a structure A is computable, and the universe is a subset of ω (the set of natural numbers), usually computable. This means that we can assign Gödel numbers to elementary first order sentences in the language of A augmented by the elements. Then the atomic diagram of A, denoted by D(A), and the complete diagram, denoted by Dc(A), can be identified with sets of natural numbers. We say that A is computable if D(A) is computable. This is equivalent to saying that the universe of A is computable, and the relations and operations are uniformly computable. We say that A is decidable if Dc(A) is computable.

    Some of the basic questions that interest us were first considered by Ershov [42], Goncharov [53], [54], Nurtazin [129], and others at Novosibirsk, who spoke of constructivizations instead of computable structures, and strong constructivizations instead of decidable structures. The following definitions are due to Ershov.

    A constructivization is a pair (A, v), where A is an abstract structure, countable, but with arbitrary objects as elements, and v is a function from ω onto the universe of A such that the relation A ⊨ φ(v(n1), …, v(nk)), between atomic formulas φ(x1, …, xk) and tuples (n1, …, nk) from ω, is computable. A strong constructivization is a pair (A, v), as above, in which the relation A ⊨ φ(v(n1), …, v(nk)), between arbitrary elementary first-order formulas φ(x1, …, xk) and tuples (n1, …, nk) from ω, is computable. The abstract structure A is said to be constructivizable, or strongly constructivizable, if there exists v as above.

    Proposition 0.1

    (a) A (non-empty) structure A has a constructivization if and only if it has a computable copy.

    (b) A (non-empty) structure A has a strong constructivization if and only if it has a decidable copy.

    We give the proof for (a)—the proof for (b) is the same. First, suppose that A has a computable copy B. We obtain a constructivization v = f ∘ g, where f is an isomorphism from B onto A, and g is a computable function from ω onto B. Now, suppose that v is a constructivization of A. The relation v(m) = v(m′) is an equivalence relation on ω, and it is computable, since the formula x = y is atomic. Taking the first representative of each equivalence class, we obtain a computable subset B of ω such that v maps B 1 – 1 onto A. The induced structure B with universe B is a computable copy of A.
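    The step of taking first representatives can be sketched concretely. In this minimal illustration, the equivalence relation (congruence mod 3) and the bound are hypothetical stand-ins: the actual construction uses the computable relation v(m) = v(m′) and runs over all of ω.

```python
def first_representatives(equiv, bound):
    """Collect the least element of each equivalence class among
    0, 1, ..., bound - 1 (a toy bounded version of the construction)."""
    reps = []
    for n in range(bound):
        # n is a first representative iff it is equivalent to no earlier one
        if not any(equiv(n, r) for r in reps):
            reps.append(n)
    return reps

# hypothetical example: congruence mod 3 as the computable equivalence relation
print(first_representatives(lambda a, b: a % 3 == b % 3, 10))  # [0, 1, 2]
```

    The set of representatives is computable because membership of n is decided by finitely many applications of the computable relation to smaller numbers.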

    0.3 Authorship and acknowledgements

    This book was not meant to be joint. Chris Ash was supposed to write it. I expected to participate only by nagging him occasionally until it was done. Sadly, Chris died, having written parts of five chapters, plus a tentative list of titles for further chapters. I wanted the book finished. I hope much of the material that Chris would have included is here.

    I am grateful to John Crossley, Sara Miranda, and other colleagues and friends of Ash at Monash University for their hospitality and support. I am grateful to Anne-Marie Vandenberg for locating and typing the part of the book that Chris had written, and for typing my initial drafts of chapters. I am grateful to Manny Lerman and Richard Shore for their encouragement and suggestions.

    I am grateful to my students for their many contributions. Carlos Alegria, Sarah Oates, Kelleen Hurlburt, John Thurber, Alan Vlach, Grzegorz Michalski, Alex McAllister, Charles McCoy, and Andrew Arana have all proved interesting results in computable structure theory. Several of these results are discussed in the book. Some students, Alex McAllister, in particular, suggested numerous corrections and improvements on an early draft.

    Finally, I am grateful to Tim McNicholl for detailed comments on a near-final draft. The errors that remain are all mine.

    Chapter 1

    Computability

    1.1 Informal concepts

    The intuitive notion of an effective procedure for answering a class of questions pervades all of mathematics. With the common use of computers, it scarcely needs explaining, but there are some points which need to be established.

    First, the objects with which one can compute all turn out to be, essentially, finite strings, or sequences, of symbols from a finite alphabet. A finite sequence from a finite alphabet can always be coded as a single natural number. For example, letting 1, 2, 3, and 4 be codes for the symbols (, !, ), and *, respectively, and making use of the uniqueness of prime factorizations, we can code the string ((!)* by the single number 2¹3¹5²7³11⁴. As a result of coding, we may assume that all computations are performed on natural numbers. We shall deal so exclusively with natural numbers that unless otherwise specified, set will mean set of natural numbers, relation will mean relation on natural numbers, and (partial) function will mean (partial) function on natural numbers. The set of natural numbers is denoted by ω.
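    The prime-power coding can be sketched as follows, using the symbol codes from the example above.

```python
def first_primes(k):
    """Return the first k primes, by trial division."""
    ps = []
    n = 2
    while len(ps) < k:
        if all(n % p != 0 for p in ps):
            ps.append(n)
        n += 1
    return ps

SYMBOL_CODE = {'(': 1, '!': 2, ')': 3, '*': 4}  # codes from the text

def code_string(s):
    """Code a string as a product of prime powers; uniqueness of prime
    factorization guarantees the coding can be reversed."""
    codes = [SYMBOL_CODE[c] for c in s]
    result = 1
    for p, c in zip(first_primes(len(codes)), codes):
        result *= p ** c
    return result

print(code_string('((!)*'))  # 2**1 * 3**1 * 5**2 * 7**3 * 11**4 = 753279450
```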

    Second, it is useful to consider computations with a k-tuple of natural numbers as input. For example, on input (n1, n2, n3), we may compute (n1 + n2) · n3. In principle, the input can be coded as a single number 2^n1 · 3^n2 · 5^n3. Sometimes we wish to do this, but sometimes we do not.

    Third, in actually programming computers, one is concerned with efficiency. Throughout what follows, we are not. In particular, the method of coding we used above is grossly inefficient. We chose it because of tradition and because it is easy to understand. A related point is that a real computer has only a finite capacity. We shall consider idealized computers, with no limit on the size of the input, or the amount of memory space available for use, or the size of the output.

    Fourth, it is essential to recognize that there are effective procedures that on some inputs yield an output and on others do not. Typical of these procedures are those which involve a search, and yield an output if the search halts, but not otherwise. To describe such procedures, we take as our basic objects of study partial functions, of one or more variables (where the variables range over natural numbers). The term partial function includes, by the way, total function (one defined for all values of the variables) as a special case. We are interested in partial computable functions.

    We illustrate these notions in a mathematical setting. Hilbert, in the 10th problem on his famous list of 23, asked for an effective procedure for determining, given a polynomial p(x1, …, xk), with integer coefficients, and any number of variables, whether the equation

    p(x1, …, xk) = 0

    has a solution in the integers. There is a natural procedure which gives the answer Yes just in case there is a solution. The procedure involves systematically testing assignments of integers to the variables until a solution appears. If there is no solution, then the testing continues forever.
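    For polynomials in a single variable, the systematic testing can be sketched as follows. The search bound is an artificial addition so that the illustration halts; the genuine semi-decision procedure searches without bound, over tuples of integers, and may run forever.

```python
def search_for_root(coeffs, bound):
    """Test integer assignments x = 0, 1, -1, 2, -2, ... for a root of
    p(x) = coeffs[0] + coeffs[1]*x + coeffs[2]*x**2 + ...
    Without the bound, this search would continue forever on a
    polynomial with no integer root."""
    p = lambda x: sum(c * x ** k for k, c in enumerate(coeffs))
    for n in range(bound):
        for x in {n, -n}:
            if p(x) == 0:
                return True   # the search halts with answer Yes
    return None               # inconclusive: the artificial bound was reached

print(search_for_root([-4, 0, 1], 10))  # x**2 - 4 = 0 at x = 2, so True
```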

    Having settled on a method of coding, we may define a function f such that if x is the code of a polynomial, and y is the code of a sequence of integers appropriate to substitute for the variables, then f (x, y) is the absolute value of the integer which results from making the substitution. If it is not the case that x and y are codes of the correct sorts, let f (x, y) = 1. Now, f is a total function which is clearly computable.

    In terms of f, we may define a partial function g, where g(x) is the least y for which f (x, y) = 0, if any, and undefined otherwise. Then g is a partial computable function. Given x, we need only calculate

    f (x, 0), f (x, 1), f (x, 2), …

    in turn, until we come to the least y for which f (x, y) = 0, if there is one, while if there is no such y, the search goes on forever and no output results. Let

    h(x) = 1 if there is some y for which f (x, y) = 0, and h(x) = 0 otherwise.

    Then h is a total function. From what we have said so far, it is not clear whether h is computable. Given input x, we can compute f (x, 0), f (x, 1), f (x, 2), …, and so produce the answer 1 if there is an appropriate y. However, to compute h, we need some other kind of search, to be performed simultaneously, to produce the answer 0 in case the first search does not end.

    A fifth and last point arises on considering a single mathematical statement whose truth or falsity is presently unknown. One example is Goldbach’s Conjecture, the statement that every even number greater than 4 is the sum of two prime numbers. Let us define the function k, where k(x) = 1 if Goldbach’s Conjecture is true, and 0 otherwise. Any constant function is computable, and k is either the constant function 1 or the constant function 0. Therefore, k is computable. Admittedly, we do not know which procedure works, but that does not come into our reckoning.

    In general, there is always an effective procedure for answering a single question, or any finite number of questions. The procedure consists of consulting a table of answers. We may not know which is the correct table, but that is a separate issue. The non-trivial questions of computability ask whether there is a procedure which applies to all inputs in some infinite set.

    In addition to k-ary functions (that is, functions of k variables), we consider k-ary relations. For example, we may define a 2-ary, or binary, relation R which holds for a pair of numbers (x, y) just in case x + y is even. We write R(x̄) if the relation R holds of the tuple x̄, and ¬R(x̄) otherwise. We are interested in computability of relations. We say that a k-ary relation R is computable if there is an effective procedure which, on input x̄ = (x1, …, xk), yields output Yes or No according to whether the relation holds or not. We may convert the output to numbers and deal with the characteristic function of R, where this is the total k-ary function χR such that

    χR(x̄) = 1 if R(x̄), and χR(x̄) = 0 otherwise.

    To say that R is computable is the same as to say that χR is computable.

    A 1-ary, or unary, relation R is identified with the set for which it holds, so we may write R(x) or x ∈ R interchangeably. Similarly, a k-ary relation is identified with the set of k-tuples x̄ = (x1, …, xk) for which it holds, so we may write x̄ ∈ R instead of R(x̄).

    Varying the notion of computability, we say that R is semi-computable if there is an effective procedure which, on input x̄, yields output Yes if R(x̄), and otherwise continues forever without giving an output.¹ Converting the output to numbers, we define the semi-characteristic function of R to be the partial function δR such that

    δR(x̄) = 1 if R(x̄), and δR(x̄) is undefined otherwise.

    Then R is semi-computable if and only if its semi-characteristic function is computable. Note that if R is computable, then it is semi-computable, because an effective procedure for computing χR can always be modified so that instead of giving output 0, it performs some endless calculation.
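    The modification just described can be written out directly. This is a sketch only: calling the resulting function on a tuple outside the relation genuinely never returns.

```python
def semi_characteristic(chi):
    """Turn a characteristic function into the semi-characteristic one:
    output 1 when the relation holds, and otherwise perform an endless
    calculation in place of output 0."""
    def delta(*args):
        if chi(*args) == 1:
            return 1
        while True:      # diverge instead of answering No
            pass
    return delta

# R(x, y): x + y is even, as in the earlier example
chi_R = lambda x, y: 1 if (x + y) % 2 == 0 else 0
delta_R = semi_characteristic(chi_R)
print(delta_R(3, 5))  # 1, since 3 + 5 is even
```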

    We illustrate, again in the setting of Hilbert’s 10th Problem. Let R(x) hold if x is the code for a polynomial such that the corresponding equation has a solution in the integers, and recall the functions f, g, and h defined above. Clearly, R is semi-computable: R(x) holds if and only if g(x) is defined.

    We do not claim that R is computable; χR = h, where the computability of h has been left in doubt.

    1.2 Church’s Thesis

    So far, we have not shown that there is any function or relation which is not in fact computable. To do so, we need a precise definition, or at least some basic properties, of the class of computable functions. There are a number of different definitions. The one due to Turing, in terms of Turing machines, is perhaps the most convincing. (See the classic text by Kleene [80] for a discussion of several definitions, and proofs that they are all equivalent.)

    We refrain here from describing the ingredients of a Turing machine, since we need so little of this information to proceed. Turing machines, and all known computers, operate by combining very simple basic steps according to some finite list of instructions. Turing was responsible for the idea, commonplace today, that it is not necessary to build a new machine whenever we wish to compute a new function. He described a universal machine, taking a program as part of the input, and capable of computing any function which could be computed by another machine.

    The various definitions of the class of computable functions are all equivalent. For a discussion of several different-looking definitions, together with proofs of equivalence, see [80]. Since Turing, no new computation methods have been found which yield functions outside the class captured by Turing’s definition. The inescapable conclusion is that any function which is computable by any means whatever is computable by a Turing machine. This statement, although not in its original form, is attributed to Church.

    Church’s Thesis: Computable = Computable by a Turing machine

    Church’s Thesis asserts that the intuitive notion of being effectively computable agrees with the formal notion of being computable by a Turing machine. It is hard to see how the statement could ever be proved, but it is easy to see how it might be refuted—by discovering a really new method of computation, one that computes new functions. This has not happened in approximately 70 years, and it is generally accepted that Church’s Thesis is true beyond all reasonable doubt.

    Many people adopted a different vocabulary to indicate that their results about computability refer to the formal notion.

    The word recursive comes from the definition of computability due to Kurt Gödel and Stephen Kleene. This definition, more algebraic than Turing’s, involved, among other things, a scheme for obtaining new functions by recursion. The phrase general recursive function sometimes appears in the literature instead of recursive function. The reason for the word general is to distinguish the notion from the more restrictive class of primitive recursive functions, which Gödel originally called recursive.

    The name recursively enumerable, abbreviated by r.e., arises from a different characterization of the semi-computable relations as the relations R such that there is a computable function which lists, or enumerates (in some order) all k-tuples in R. We shall see shortly that this characterization is equivalent to the one that we chose as a definition—the semi-characteristic function of R is a partial computable function.

    Recently, there has been a move to use the term computable instead of recursive, on the grounds that it is more descriptive. Similarly, recursively enumerable and r.e. are being replaced by computably enumerable and c.e.

    1.3 Proof by Church’s Thesis

    Having emphasized that we are dealing with Turing computable functions, we follow the apparently bizarre course of not showing rigorously that the functions that we claim are computable can in fact be computed by a Turing machine. We accept any description of a computation which accords with our intuitive notion of computability as evidence that the function is Turing computable. This form of argument is known as a proof by Church’s Thesis.

    In the early years of the subject, painstaking efforts were made to show that functions were Turing computable, and Kleene’s characterization was very useful for this purpose. Our justification for omitting all of this effort is that the work has indeed been done, and if there is any failure of Church’s Thesis, it must involve a totally new concept in computation, very different from those which we use. Rogers [134] took this approach, with considerable gains in brevity and clarity. The simplest viewpoint is just to assume that Church’s Thesis is true. In any event, we claim that our results on computable functions hold for Turing computable functions.

    1.4 Some basic results

    In the remainder of this chapter we review some very basic results from computability that we shall be using later. These results can all be found in [134]. We need only a few facts about Turing machines. Each Turing machine is determined by a program which is a finite sequence of symbols from a finite language, and so can be coded by a single number, traditionally denoted by e. We adopt some convention so that every number codes some Turing machine, and we refer to the eth Turing machine. On input n, the machine may eventually halt, giving a natural number value, or it may not.

    The eth Turing machine can be regarded as computing a unary partial function, denoted by φe. Thus, we have a list φ0, φ1, φ2, …, of all unary partial functions that are Turing computable. There are repetitions, of course, since different Turing machines (corresponding to different programs) may compute the same function. We say that φe is the partial Turing computable function with index e.

    Notation: If the computation of Turing machine e on input n halts, then we write φe(n) ↓, and otherwise we write φe(n) ↑. If n < r and the computation halts within r steps, then we may write φe,r(n) ↓, and otherwise, we may write φe,r(n) ↑.

    The partial computable function φe(n), of the two variables e and n, can be computed by first decoding e as a Turing machine and then running this machine with input n. As an example of a proof by Church’s Thesis, having just indicated informally how to compute φe(n), we conclude that it is Turing computable. A Turing machine that computes this function is said to be universal.

    Note that there can be no list ψ0, ψ1, ψ2, … of all Turing computable total functions such that the function ψe(n), of the two variables e and n, is computable. For, suppose there is such a list. Then the function f defined by f(n) = ψn(n) + 1 is computable, and hence, by Church’s Thesis, Turing computable. It follows that f is one of the functions ψe. Then we have both f (e) = ψe(e) and f(e) = ψe(e) + 1, a contradiction.

    For partial functions, a diagonal argument similar to the one above yields, not a contradiction, but an interesting fact about the function φn(n), which, by Church’s Thesis, is a perfectly good, Turing computable, partial function of one variable.

    Theorem 1.1

    The partial computable function φn(n) cannot be extended to any total computable function.

    Proof: Suppose that φn(n) can be extended to a total computable function f (n). Then f (n) + 1 is a computable total function, which must therefore be equal to φe(n), for some e. Since φe is total, φe(e) ↓, so f (e) = φe(e), and we have

    φe(e) = f (e) + 1 = φe(e) + 1,

    a contradiction.

    We can argue the existence of non-computable functions purely on the basis of cardinality—there are uncountably many functions and only countably many Turing machines. Applying the previous theorem, we may obtain a specific example of a non-computable function f. Let

    f (n) = φn(n) if φn(n) ↓, and f (n) = 0 otherwise.

    Then f is a total function extending φn(n), so by the previous theorem, f is not computable.

    Actually, we described another specific non-computable function earlier. Hilbert’s 10th Problem was to find an effective procedure for computing a certain total function h. It follows from a theorem of Matijasevic [116], [35] (with a somewhat longer proof than the one we have just given for the non-computability of f) that h is not computable.²

    1.5 Coding functions

    As we mentioned, a pair of numbers can be coded as a single number. We may assume that this is done by some standard computable pairing function that maps ω × ω 1 − 1 onto ω. One such pairing function is

    (x, y) ↦ ((x + y)(x + y + 1))/2 + x.

    Another is

    (x, y) ↦ 2^x (2y + 1) − 1.

    For a satisfactory pairing function, the two functions that reclaim the components of a pair, in addition to the pairing function itself, should be reasonably simple to compute. Fixing one such pairing function (to use from here on), we write 〈x, y〉 for the value of the pairing function at (x, y). We denote the component functions by ( )1 and ( )2, so that z = 〈(z)1, (z)2〉. A final property of the two pairing functions above, which may occasionally be helpful, is that

    x, y ≤ 〈x, y〉.

    We may use the pairing function to encode triples by defining 〈x, y, z〉 to be 〈〈x, y〉, z〉 and, repeating this device, we may encode k-tuples for any k. Where it is clear from the context what k is, we may use ( )1, ( )2, …, ( )k to denote the corresponding decoding functions. The coding functions for k-tuples allow us, in principle, to consider only unary computable functions, since the k-ary function f (x1, x2, …, xk) is clearly computable if and only if the unary function x ↦ f ((x)1, (x)2, …, (x)k) is computable. We may write x̄ to denote the tuple (x1, x2, …, xk), where k is arbitrary or determined by the context.
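    A concrete choice of pairing function is the first one displayed above (an assumption for illustration; the text only fixes some such function with easily computed components):

```python
def pair(x, y):
    """A computable pairing function mapping ω × ω 1-1 onto ω:
    (x, y) -> (x + y)(x + y + 1)/2 + x."""
    return (x + y) * (x + y + 1) // 2 + x

def components(z):
    """The decoding functions ( )1 and ( )2, so pair(*components(z)) == z."""
    w = 0
    while (w + 1) * (w + 2) // 2 <= z:   # largest w with w(w+1)/2 <= z
        w += 1
    x = z - w * (w + 1) // 2
    return x, w - x

def pair_tuple(*xs):
    """Encode a k-tuple by iterating: <x, y, z> = <<x, y>, z>, and so on."""
    z = xs[0]
    for x in xs[1:]:
        z = pair(z, x)
    return z

print(components(pair(3, 7)))  # (3, 7)
```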

    In some situations, it is useful to encode finite sequences of all lengths simultaneously. We could do this using the uniqueness of prime factorization. Here is another method, making use of the coding functions for k-tuples for various fixed k. The empty sequence is encoded by 0, and a sequence (x1, x2, …, xk), for k ≥ 1, is encoded by

    〈k − 1, 〈x1, x2, …, xk〉〉 + 1.

    Finally, we may wish to encode finite sets. One method is to encode the set {x1, x2, …, xk}, where x1 < x2 < … < xk, by the number

    2^x1 + 2^x2 + … + 2^xk,

    and encode the empty set by 0. We refer to the code for a set as the canonical index of the set. Noting that each number x is the canonical index for some finite set, we write Dx for the set having canonical index x.

    1.6 Kleene’s Normal Form Theorem

    Given a relation R(x̄, y), we let μy R(x̄, y) denote the least y such that R(x̄, y) holds, if there is one. Thus, μy R(x̄, y) is a partial function of x̄. We refer to this operator, transforming relations into partial functions, as the μ-operator. Clearly, if R(x̄, y) is a computable relation, then μy R(x̄, y) is a partial computable function, which we compute by testing the values of y, in increasing order. Here is Kleene’s Normal Form Theorem.

    Theorem 1.2

    There exist a computable unary function U and, for each n, an (n + 2)-ary computable relation Tn, such that the n-ary partial function computed by the eth Turing machine is U(μz Tn(e, x̄, z)).

    Proof: Our proof is necessarily only a sketch, since we have refrained from giving the precise details of Turing machines. We can encode a Turing machine configuration, giving the current state and contents of the location being accessed, plus the contents of all storage locations used so far. Since we can encode a single configuration, we can also encode a sequence of configurations.

    We define the relation Tn(e, x̄, z) to hold if z is the code for a sequence of configurations, starting with the initial configuration for input x̄, passing from one configuration to the next by following the instructions for the Turing machine whose code is e, and ending with a halting configuration that gives numerical output. We define the function U(z) to decode the number given as output, or 0, if this is inapplicable.

    Theorem 1.2 is useful in a variety of contexts. For one thing, it implies that a single application of the μ-operator is enough for computing any partial computable function.
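    The μ-operator itself has a direct rendering (a sketch: for a computable relation this is exactly the increasing search described earlier, and it diverges when no witness exists):

```python
def mu(R, *x):
    """Least y such that R(x̄, y) holds: a partial function of x̄ that
    runs forever when there is no such y."""
    y = 0
    while not R(*x, y):
        y += 1
    return y

# an illustrative computable relation: least y making x + y divisible by 5
print(mu(lambda x, y: (x + y) % 5 == 0, 3))  # 2
```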

    1.7 Facts about c.e. sets and relations

    According to our definition, a relation is c.e. if and only if its semi-characteristic function is a partial computable function. The next result gives another possible definition, saying that a relation is c.e. if and only if it is a projection of a computable relation.

    Theorem 1.3

    An n-ary relation S(x̄) is c.e. if and only if there is a computable relation R(x̄, y) such that

    S(x̄) ⇔ (∃y) R(x̄, y).

    Proof: First, suppose that S is c.e. By our definition, the semi-characteristic function δS(x̄) is a partial computable function. By Theorem 1.2, we have

    δS(x̄) = U(μy Tn(e, x̄, y))

    for some e. Then

    S(x̄) ⇔ (∃y) Tn(e, x̄, y).

    Therefore, we have

    S(x̄) ⇔ (∃y) R(x̄, y),

    where R(x̄, y) is the computable relation Tn(e, x̄, y).

    Now, suppose there is a computable relation R such that

    S(x̄) ⇔ (∃y) R(x̄, y).

    To compute δS(x̄), we check, in turn, whether R(x̄, 0), R(x̄, 1), R(x̄, 2), etc., and if the answer is ever Yes, then we give output 1.

    The next result characterizes computable relations in terms of c.e. relations.

    Theorem 1.4

    A relation S(x̄) is computable if and only if both S(x̄) and ¬S(x̄) are c.e.

    Proof: First, suppose that S(x̄) is computable. We have already indicated how a procedure for computing the total function χS can be converted into a procedure for computing the partial function δS. We obtain a procedure for computing δ¬S in a similar way. Now, suppose that both δS and δ¬S are partial computable functions. We can compute χS(x̄) by simultaneously applying procedures for computing δS(x̄) and δ¬S(x̄) until one of them halts, and then giving the appropriate output 1 or 0.
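    The simultaneous application can be sketched by interleaving the two searches step by step. The generators here are toy stand-ins for step-bounded runs of the procedures for δS and δ¬S; exactly one of them eventually yields True.

```python
def dovetail(search_in, search_out, x):
    """Compute the characteristic function of S from semi-decision
    searches for S and for its complement, run in alternating steps."""
    a, b = search_in(x), search_out(x)
    while True:
        if next(a):
            return 1
        if next(b):
            return 0

def search_in(x):        # halts (yields True) iff x is even
    y = 0
    while True:
        yield x == 2 * y
        y += 1

def search_out(x):       # halts (yields True) iff x is odd
    y = 0
    while True:
        yield x == 2 * y + 1
        y += 1

print(dovetail(search_in, search_out, 7))  # 0, since 7 is odd
```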

    A set or relation is said to be co-c.e. if its complement is c.e. Thus, Theorem 1.4 says that a relation is computable if and only if it is both c.e. and co-c.e.

    The next result characterizes partial computable functions in terms of c.e. relations.

    Theorem 1.5

    An n-ary partial function f (x̄) is computable if and only if the (n + 1)-ary relation f (x̄) = y is c.e.

    Proof: If the relation f (x̄) = y is c.e., then by Theorem 1.3, there is a computable relation R such that

    f (x̄) = y ⇔ (∃z) R(x̄, y, z).

    Then we can compute f (x̄) by searching for a pair (y, z) such that R(x̄, y, z), and giving output y if the search halts. Conversely, if f (x̄) is a partial computable function, then by Theorem 1.2, there is some e such that

    f (x̄) = U(μz Tn(e, x̄, z)).

    By the definition of Tn,

    f (x̄) = y ⇔ (∃z)[Tn(e, x̄, z) & U(z) = y],

    so by Theorem 1.3, the relation f (x̄) = y is c.e.

    The domain of an n-ary partial function f (x̄) is an n-ary relation S(x̄), where S(x̄) holds if and only if f (x̄) is defined. In fact, the usual definition says that a relation is c.e. if it is the domain of a partial computable function. The next result says that this definition is equivalent to ours.

    Theorem 1.6

    A relation S is c.e. if and only if it is the domain of a partial computable function.

    Proof: By definition, if S is c.e., then it is the domain of the partial computable function δS. Conversely, if S is the domain of a partial computable function f, then a procedure for computing f can be converted into a procedure for computing δS, replacing any output by 1.
