
Word Structures and Their Automatic Presentations

Xiaoyang Gong, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
Bakh Khoussainov, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
Yuyang Zhuge, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
Abstract

We study automatic presentations of the structures (ℕ;S), (ℕ;ES), (ℕ;≤), and their expansions by a unary predicate U. Here S is the successor function, ES is the undirected version of S, and ≤ is the natural order. We call these structures word structures. Our goal is three-fold. First, we study the isomorphism problem for automatic word structures by focusing on the following three problems. The first problem asks to design an algorithm that, given an automatic structure 𝒜, decides if 𝒜 is isomorphic to (ℕ;S). The second asks to design an algorithm that, given two automatic presentations of (ℕ;S,U1) and (ℕ;S,U2), where U1 and U2 are unary predicates, decides if these structures are isomorphic. The third problem investigates if there is an algorithm that, given two automatic presentations of (ℕ;≤,U1) and (ℕ;≤,U2), decides whether U1∩U2≠∅. We show that these problems are undecidable. Next, we study the intrinsic regularity of the function S in the structure Pathω=(ℕ;ES). We build an automatic presentation of Pathω in which S is not regular. This implies that S is not intrinsically regular in Pathω. For U⊆ℕ, let dU be the function that computes the distances between the consecutive elements of U. We build automatic presentations of (ℕ;≤,U) where dU can realise logarithmic, radical, intermediate, and exponential functions.

Keywords and phrases:
Automatic structures, the isomorphism problem, decidability, undecidability, regular relations
Copyright and License:
© Xiaoyang Gong, Bakh Khoussainov, and Yuyang Zhuge; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Automata over infinite objects; Theory of computation → Logic
Funding:
The authors acknowledge the support of the NSFC grants 62172077 and 62350710215, and the Sichuan S&T Department grant 2025HJPJ0006.
Editors:
Paweł Gawrychowski, Filip Mazowiecki, and Michał Skrzypczak

1 Introduction

We investigate automatic presentations of the structures (ℕ;≤), (ℕ;S), (ℕ;ES), (ℕ;≤,U), (ℕ;S,U) and (ℕ;ES,U), where ℕ={0,1,2,…}, ≤ is the natural order, the function S is the successor function, ES is the undirected version of S (that is, {k,j}∈ES iff k+1=j), and U is a unary predicate on ℕ. The first structure is the ordinal ω, the second is the successor structure, and the third structure is the infinite path graph Pathω. For the last three structures, without loss of generality, we postulate that 0∈U and U is infinite. Each of the last three structures with the predicate U determines the infinite binary word αU=αU(0)αU(1)⋯αU(i)⋯, where αU(i)=1 if i∈U and αU(i)=0 otherwise. These binary words are full isomorphism invariants of (ℕ;≤,U), (ℕ;S,U), and (ℕ;ES,U), i.e., αU1=αU2 if and only if (ℕ;S,U1)≅(ℕ;S,U2). Thus, here is a definition:

Definition 1.

Call (ℕ;≤,U), (ℕ;S,U), (ℕ;ES,U) ordered words, successor words, and graph words, respectively. We also call any of these structures word structures.
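For concreteness, here is a minimal Python sketch (the helper name is ours, not from the paper) computing a finite prefix of the word αU from a predicate U given as a set of natural numbers:

```python
def alpha_prefix(U, length):
    """Return the first `length` symbols of the word alpha_U, where
    alpha_U(i) = 1 if i is in U and alpha_U(i) = 0 otherwise."""
    return "".join("1" if i in U else "0" for i in range(length))

# Example: U = multiples of 3 (so 0 is in U and U is infinite).
U = {n for n in range(100) if n % 3 == 0}
print(alpha_prefix(U, 10))  # prints 1001001001
```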

Definition 2.

An automatic presentation of the ordered word structure (ℕ;≤,U) is a triple (D;≤D,UD), where:

  • D and UD⊆D are both FA recognizable languages,

  • ≤D is a binary relation on D recognized by a two-head synchronous finite automaton, and

  • The structure (D;≤D,UD) is isomorphic to (ℕ;≤,U).

Automatic presentations of (ℕ;S,U) and (ℕ;ES,U) are defined similarly.

1.1 Automatic structures

A relational structure (D; R_0^{n_0},…,R_k^{n_k}) is automatic if the domain D and the atomic relations R_i^{n_i} (of arity n_i) are all finite automata recognizable. We assume the reader is familiar with the basics of finite automata, e.g., runs, acceptance, and recognizability [33]. Nevertheless, we explain finite automata recognizability of binary relations R on the set Σ* of strings over the alphabet Σ. So, let x=x_1⋯x_n and y=y_1⋯y_m be two strings over Σ. We code the pair (x,y) as the string z=x⊗y, where z=z_1⋯z_{max{m,n}} is defined as follows: z_i=(x_i,y_i) if i≤min{n,m}, z_i=(⋄,y_i) if n<i≤m, and z_i=(x_i,⋄) if m<i≤n, where ⋄∉Σ is a new symbol. The operation ⊗ is called the convolution operation on strings. The string z=x⊗y is a string over the finite alphabet Σ_⋄ = {(σ,⋄) ∣ σ∈Σ} ∪ {(⋄,σ) ∣ σ∈Σ} ∪ (Σ×Σ).

Let ⊗(R) be the language over Σ_⋄ obtained by applying the convolution operation ⊗ to the pairs of strings in R. Call R finite automata recognizable if ⊗(R) is recognized by a finite automaton. One can view finite automata over Σ_⋄ as two-head synchronous automata over Σ. The reader can consult standard papers on automatic structures for details and examples, e.g., [1, 28, 7, 32, 49, 31], although Definition 2 suffices for this paper. Finally, in this paper, we use the standard terminology and notation from the theory of finite automata.
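For illustration, a minimal Python sketch of the convolution of two strings; the symbol '#' stands in for the padding symbol ⋄ and the function name is ours:

```python
def convolution(x, y, pad="#"):
    """Convolution of two strings: align them position by position and
    pad the shorter one with a fresh symbol (here '#' plays the role of
    the padding symbol)."""
    n = max(len(x), len(y))
    return [(a, b) for a, b in zip(x.ljust(n, pad), y.ljust(n, pad))]

# A two-head synchronous automaton reads exactly this sequence of pairs.
print(convolution("abba", "ab"))
# [('a', 'a'), ('b', 'b'), ('b', '#'), ('a', '#')]
```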

The connection between automata, logic and structures traces back to the 1960s when Büchi proved the decidability of S1S, the monadic second-order theory of (ℕ;S), using automata [8]. M. Rabin extended this result, through the use of tree automata, by proving that the monadic second-order theory S2S of the full infinite binary tree is decidable [47]. In the context of computational group theory, W. Thurston and his co-authors [12] realized that geodesic words in Cannon's groups of hyperbolic isometries are finite automata recognizable and defined the concept of automatic groups, which was extended to Cayley automatic groups in [27] while retaining its tractable essence. These and related papers constitute a foundation of modern research on automata, logic, their interactions, and applications. Also, automatic structures have attracted much attention in the last few decades, see surveys by Grädel [16], Bárány, Rubin, and Grädel [2], F. Stephan [52], and Rubin [50, 36]. Several key factors motivate this study. One is that automatic structures exhibit nice algorithmic and model-theoretic properties. The theory of every automatic structure is decidable. Automatic structures are closed under interpretations in first-order logic [7]. The other aspects involve connections to algebra (e.g., Gromov's theorem that finitely generated groups of polynomial growth are virtually nilpotent [17]) and additive combinatorics (e.g., Freiman's theorem that sets with small sums are contained in generalized arithmetic progressions [13]). These connections were revealed in the characterization theorem of finitely generated finite automata presentable groups [46, 45] and the proof that the group (ℚ;+) has no automatic presentation [53].

The presentations use finite automata. There are also other presentations of structures: tree automatic presentations [22, 2, 21], computably enumerable presentations [29, 9, 15], co-computably enumerable presentations [39], semi-automatic presentations [24, 23], ω-automatic presentations [19, 26] and unary automatic presentations [30, 35], etc.

1.2 Background and contributions

We describe three, among many, research directions directly related to this work.

1.

One direction focuses on first-order (FO) theories and interpretations of automatic structures in FO-logic and its extensions. For instance, Hodgson [20], Khoussainov and Nerode [28], Blumensath and Grädel [7] proved that the first-order theory of any automatic structure is decidable. This is known as the decidability theorem for automatic structures.

This result was extended to FO-logic with the "there exist n many modulo m" quantifiers ∃^{n,m} [38] and the "there exist infinitely many" quantifier ∃^ω [7]. S. Rubin [50] studied extensions with Ramsey quantifiers. Blumensath and Grädel characterized automatic structures in terms of interpretations [6]. They proved the interpretability theorem: a structure is automatic iff it has an interpretation in the infinite binary tree with the two successor relations, the prefix order and the equal-length predicate [6]. Finding extensions of FO-logic that preserve the decidability and interpretability theorems is one of the key topics in the area.

2.

The second direction focuses on the classical isomorphism problem. The problem asks to design an algorithm that, given two automatic structures, decides if the structures are isomorphic. The isomorphism problem pertains to describing isomorphism invariants and extracting structure theorems. For instance, the Cantor-Bendixson ranks of automatic linear orders turned out to be a useful invariant. The study of Cantor-Bendixson ranks implied that an ordinal is automatic iff it is less than ω^ω [10]. Using this, Khoussainov, Stephan, and Rubin proved that the isomorphism problem for automatic ordinals is decidable. Thomas and Oliver fully described finitely generated automatic groups [46]. Khoussainov, Nies, Rubin and Stephan characterized and solved the isomorphism problem for automatic Boolean algebras [34]. Grädel and Blumensath were the first to prove that the isomorphism problem for automatic structures is undecidable [7]. Khoussainov, Nies, Rubin, and Stephan proved that the isomorphism problem for automatic structures is Σ¹₁-complete [34]. D. Kuske, M. Lohrey, and J. Liu proved that the isomorphism problems for automatic linearly ordered sets and automatic trees are undecidable, and that the isomorphism problem for automatic equivalence relations is Π⁰₁-complete [40]. Moreover, B. Khoussainov and M. Ganardi showed that the isomorphism problem for automatic equivalence relations on regular domains of polynomial growth is still undecidable [14].

3.

The third direction concerns the intrinsic regularity of relations. A relation R in a structure 𝒜 is intrinsically regular if from any automatic presentation of 𝒜 we can extract a finite automaton that recognizes the relation R [38]. For instance, all first-order definable relations (with and without parameters) are intrinsically regular. For any first-order formula ϕ(x,ȳ) and any automatic structure 𝒜, the set of all p̄∈A such that |{a∈𝒜 ∣ ϕ(a,p̄)}| = k mod n is intrinsically regular [38]. In an automatic partially ordered set of bounded width, the set of all x that belong to an infinite path is also intrinsically regular [37].

Let R be an intrinsically regular relation in 𝒜. Given a formula ϕ(x̄,ȳ), we can consider the expression Q_R x̄ ϕ(x̄,ȳ) (binding x̄) with the following semantics: {b̄ ∣ {ā∈𝒜 ∣ ϕ(ā,b̄)} = R}. So, intrinsically regular relations R allow us to define the FO+Q_R logic that extends first-order logic with the generalised quantifier Q_R. Such extensions preserve the decidability and interpretability theorems for automatic structures. Thus, intrinsic regularity is intimately connected with the question of extending the decidability and interpretability theorems.

To refute that R is intrinsically regular in 𝒜, we need to build an automatic presentation of 𝒜 in which R is not regular. For instance, consider the k-ary representations of integers, k>1. These presentations make the structure (ℕ;S) automatic. In these presentations the addition relation +, the order ≤, and the relations mod(n) are regular. Clearly, the relation + is not intrinsically regular in (ℕ;S) because in all unary automatic presentations of (ℕ;S) the addition is not regular. However, proving that ≤ and mod(n) are not intrinsically regular requires non-trivial arguments. Indeed, there are automatic presentations of (ℕ;S) in which the mod(n) relations are not regular [38]. Given that the mod(n) relations are intrinsically regular in (ℕ;≤) [38], we conclude that the relation ≤ in (ℕ;S) is also not intrinsically regular. Moreover, there are presentations of (ℕ;S) in which ≤ is not regular but all mod(n) are regular [38]. These presentations can be argued to be "exotic", as they show that regularity of the successor S implies neither regularity of ≤ nor regularity of mod(n); they also show that regularity of S and mod(n) does not imply regularity of ≤. More examples of non-intrinsically regular relations are in [38, 43, 49]. These examples bring insights into the understanding of regular relations, their interactions within automatic presentations, and the exploration of extensions to the decidability and interpretability theorems.
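To illustrate the first claim, here is a minimal Python sketch of the standard remainder-tracking automaton: in the base-k presentation, each set of multiples Ma is recognized by a finite automaton whose states are the residues modulo a. The function names and the default base k=2 are ours.

```python
def multiples_dfa(a, k=2):
    """DFA over base-k digit strings (most significant digit first) whose
    states are remainders modulo a; it accepts exactly the multiples of a."""
    return {"start": 0, "accept": {0}, "delta": lambda r, d: (k * r + d) % a}

def accepts(dfa, digits):
    r = dfa["start"]
    for d in digits:
        r = dfa["delta"](r, d)
    return r in dfa["accept"]

dfa3 = multiples_dfa(3)
print([n for n in range(13) if accepts(dfa3, [int(b) for b in format(n, "b")])])
# prints [0, 3, 6, 9, 12]
```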

There are certainly other exciting directions of research in automatic structures. We mention the complexity-theoretic issues. For instance, it is important to find exact complexity-theoretic bounds on the first-order theories of automatic structures. D. Kuske and M. Lohrey proved that the first-order theory of automatic graphs of bounded degree is decidable in triply exponential time [41]. Blumensath and Grädel proved that there are automatic structures whose first-order theories are non-elementary [6]. Also, D. Berdinsky has investigated the existence of automatic presentations for various types of vertex-transitive graphs [11, 4].

Our contributions for this work are listed below.

1. The isomorphism problem for (ℕ;S).

Our first isomorphism problem asks whether we can decide if a given automatic structure 𝒜 is isomorphic to (ℕ;S). Although not explicitly stated in the literature, the question is natural and has been known since 1996 [44]. Note that there is an algorithm that, given an automatic structure (D;f), decides if f satisfies the following two axioms: (a) f is injective, and (b) there is exactly one element y∈D without an f-pre-image. The structure (ℕ;S) satisfies these two axioms. However, even if (D;f) satisfies (a) and (b), this does not imply that (D;f) is isomorphic to (ℕ;S). We prove that the problem of deciding if an automatic structure is isomorphic to (ℕ;S) is undecidable. Our proof shows that this isomorphism problem is Π⁰₂-complete. Our solution is based on coding the configuration graphs of total Turing machines (TMs) into automatic presentations of (ℕ;S). Recall that a TM, whose inputs are strings over a given alphabet Σ, is called total if for all u∈Σ* the machine M halts on u by either accepting or rejecting u. The set of all total Turing machines is a Π⁰₂-complete set [25]. Our solution relies on a careful analysis and coding of the configuration graphs of reversible TMs. Lemma 6 provides the coding. Theorem 7 transforms the configuration graphs of total TMs into the structure (ℕ;S). Both the lemma and the theorem involve non-trivial reasoning.

Now we explain why the isomorphism problem for (ℕ;S) is of particular importance. Let us compare the structure (ℕ;S) with the automatic ordinal ω, which is similar to (ℕ;S). First, it is decidable if a given automatic structure 𝒜 is isomorphic to ω. Hence, the isomorphism problem for ω is decidable. Note that every automatic presentation of ω produces an automatic presentation of (ℕ;S) while the opposite is not true. Hence, it is natural to ask if one can extend the decidability result for ω to (ℕ;S). There is model-theoretic and computability-theoretic evidence towards a positive answer. Indeed, in the Lω1,ω language, the structure (ℕ;S) is easier to describe than ω. Formally, the Scott rank of (ℕ;S) is smaller than the Scott rank of ω (D. Scott showed that every countable structure 𝒜 can be described by an Lω1,ω-sentence up to isomorphism [51]; the Scott rank measures the ordinal complexity of such sentences [18]; the higher the Scott rank of a structure 𝒜, the harder it is to describe it in the Lω1,ω-language). Second, the index set of ω is Π⁰₃-complete [48] (the index set of a structure 𝒜 is the class of all Turing machines computing the atomic diagram of 𝒜). In contrast, the index set of (ℕ;S) is a Π⁰₂-complete set. In summary, from a descriptional complexity viewpoint, ω is harder to describe than (ℕ;S). Likewise, from the viewpoint of computability, the index set of ω is more complex than the one for (ℕ;S). Yet, the isomorphism problem for automatic ω is decidable whereas the isomorphism problem for (ℕ;S) is not.

2. The isomorphism problem for automatic successor word structures.

Our second isomorphism problem asks if there exists an algorithm that, given two automatic successor word structures (D1;S1,U1) and (D2;S2,U2), decides if (D1;S1,U1) and (D2;S2,U2) are isomorphic. F. Stephan poses the question in [52]. Just like our first question, this problem has remained unresolved since the start of the theory of automatic structures in 1996. In Theorem 11, we prove that no algorithm exists that decides this problem.

Here are a few important comments about our solution. First, due to the previous result (that the set of automatic presentations of (ℕ;S) is Π⁰₂-complete), it is already undecidable if a given automatic structure is a successor word structure. Therefore, we want to know if there is an algorithm that, under the promise that the given automatic structures 𝒜 and ℬ are successor words, decides if these structures are isomorphic. In the case when 𝒜 or ℬ is not a successor word structure, the algorithm can give any answer (and need not even terminate). In this sense, solving the problem is more complex than the one for (ℕ;S) because we need to consider all algorithms that (1) might not even halt on some inputs, but (2) halt on all inputs from the Π⁰₂-complete set of automatic successor word structures. Second, we stress that the solution to the first problem does not, in any direct way, imply a solution to the second problem. We use two new techniques. The first technique, through Lemma 13, reduces the equivalence problem for TMs to the equality problem for the running times of TMs. The second technique, via Lemma 14, encodes the running times of pairs of TMs (M1,M2) into automatic successor word structures 𝒜(M1,M2). This transformation is key to the proof. The transformation is not necessarily commutative, that is, the structures 𝒜(M1,M2) and 𝒜(M2,M1) could be non-isomorphic. These two techniques are new and non-trivial.

3. The agreement problem.

Let 𝒜=(D;≤,U) be an automatic ordered word structure. Let Ũ be the image of U under the isomorphism from (D;≤) onto (ℕ;≤). Clearly, automatic ordered word structures (D1;≤,U1) and (D2;≤,U2) are isomorphic iff Ũ1=Ũ2. It is unknown if the isomorphism problem for automatic ordered word structures is decidable. Here we attack the agreement problem, an approximation to the isomorphism problem. The automatic structures (D1;≤,U1) and (D2;≤,U2) agree on n if n∈Ũ1 and n∈Ũ2. If no n exists on which the structures agree, then we say that these structures disagree.

The agreement problem asks to design an algorithm that, given two automatic ordered word structures (D1;≤,U1) and (D2;≤,U2), decides if these two structures agree on some n. The agreement problem is thus the problem of deciding whether Ũ1∩Ũ2≠∅. We prove that the agreement problem is Σ⁰₁-complete. The proof utilises two techniques. The first technique non-trivially implements the transformation idea from Lemma 14 used in Theorem 11. Our second technique employs the methods of D. Kuske, J. Liu, and M. Lohrey, who showed the undecidability of the isomorphism problem for automatic equivalence relations [40]. Note that automatic ordered word structures (D1;≤,U1) and (D2;≤,U2) are isomorphic if and only if (a) the structures (D1;≤,U1) and (D2;≤,D2∖U2) disagree and (b) the structures (D1;≤,D1∖U1) and (D2;≤,U2) disagree. Thus, the agreement problem can be viewed as an approximation to the isomorphism problem for automatic ordered word structures.

4. Intrinsic regularity in Pathω=(ℕ;ES).

One can prove that a unary relation U in Pathω is intrinsically regular if and only if U is finite or co-finite. The case of binary relations turns out to be more complex. The difficult case, for instance, is the successor relation S. The difficulty of deciding whether or not S is intrinsically regular is attested by the following two facts. First, any automatic presentation of (ℕ;S) is an automatic presentation of Pathω. Second, all known automatic presentations of the structure Pathω preserve the regularity of S. It is therefore natural to conjecture that the relation S is an intrinsically regular relation in Pathω. Counter-intuitively, in Theorem 24 we build an automatic presentation of Pathω in which the successor relation S is not regular. Our proof employs techniques from [38] and non-trivially adjusts them to build such an automatic presentation of Pathω. As discussed above, this result implies that we cannot extend the decidability theorem for Pathω with the generalised quantifier that corresponds to the relation S.

5. Exotic automatic word structures.

Let 𝒜 be an automatic word structure with predicate U. Let u0,u1,u2,… be the listing of all natural numbers in U in increasing order, where u0=0. Define dU(n)=u_{n+1}−u_n for n∈ℕ; this is the distance function realised by 𝒜. The distance functions are full isomorphism invariants of word structures. Thus, the study of automatic word structures is equivalent to the study of the distance functions dU. From Theorem 7, one shows that for any total TM M there is an automatic successor word structure 𝒜 whose distance function bounds from above the running time of M. In contrast, the distance functions of automatic ordered words are bounded by exponential functions. See Corollaries 9 and 10. The interesting case, then, is the study of distance functions of automatic ordered word structures. In Section 6, we show that exponential, intermediate, polynomial, radical, and logarithmic functions can be realized as distance functions.

2 The isomorphism problem for (ℕ;S)

We prove that no algorithm exists that, given an automatic structure 𝒜, decides if 𝒜 is isomorphic to (ℕ;S). For the proof we recall standard terminology about Turing machines.

A Turing Machine (TM) M is a tuple (Q,Σ,δ,q0,F,R), where Q is the set of states, Σ is the alphabet, δ is a set of transitions of the form (q,b)→(q′,c,ϵ), q0∈Q is the initial state, and F,R⊆Q are the sets of accepting and rejecting states, respectively. Transitions will also be called instructions of M. The transition (q,b)→(q′,c,ϵ) is interpreted as follows: when M reads b in state q, it moves to state q′, writes c at the head position, and moves in the direction ϵ∈{l,r} on the tape. We assume that all our TMs are deterministic.

A configuration C of M is a string uqv, where u,v∈Σ* and q∈Q. This indicates that the tape content is uv, M is in state q, and the head of M is at the first symbol of the string v. An initial configuration is q0v, where v∈Σ*. A configuration uqv is halting if q∈F∪R. For configurations C and C′, we write C→C′ if there is an instantaneous move of M from C to C′ through one of the instructions. If C→C′ occurs via instruction i, we also denote this as i(C)=C′. There are no transitions from halting configurations.

A run of the machine M is a sequence of configurations C1, C2, …, Ci, … such that for all j we have Cj→Cj+1. We say that M is total if its run starting at any initial configuration halts (in a halting configuration). For a technical reason, we will assume that our TMs M never stall, that is, for any non-halting configuration C there is a C′ such that C→C′. We will transform the TMs M into equivalent (reversible) Turing machines M′. The Turing machines M′, as opposed to M, will be allowed to stall.
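For concreteness, a minimal Python sketch of a single move C→C′ on configurations represented as triples (u,q,v); the blank symbol '_' and the dictionary encoding of δ are assumptions of the sketch, not notation from the paper:

```python
def step(config, delta, blank="_"):
    """Apply one instruction to a configuration (u, q, v): the tape content
    is u+v and the head reads the first symbol of v. Returns the successor
    configuration, or None if no instruction applies (halting/stalling)."""
    u, q, v = config
    b = v[0] if v else blank
    if (q, b) not in delta:
        return None
    q2, c, move = delta[(q, b)]
    v = (c + v[1:]) if v else c                         # write c at the head position
    if move == "r":
        return (u + v[0], q2, v[1:] or blank)           # move the head to the right
    return (u[:-1], q2, (u[-1] if u else blank) + v)    # move == "l"
```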

Definition 3.

A configuration C is legal if it belongs to a run that starts at an initial configuration. Else, C is illegal. A maximal run of a TM M is called a chain.

Let (Conf(M),→) be the directed graph consisting of all configurations of M and the edge relation →. This graph is automatic. We call this graph the configuration graph of M. Thus, we can use graph-theoretic notation for configuration graphs. For every configuration C∈Conf(M), out-deg(C) is the number of outgoing edges from C. Similarly, in-deg(C) is the number of incoming edges into C.

The non-stalling condition (mentioned above) guarantees that for all configurations C, C is a halting configuration if and only if out-deg(C)=0. Because our Turing machines M are deterministic, for all configurations C of M we have out-deg(C)≤1.

Definition 4.

A Turing Machine M is called reversible if for all configurations C∈Conf(M) we have in-deg(C)≤1.

For technical reasons, we can extend the concepts above to multi-tape Turing machines M. For instance, this can be done by coding the contents of the tapes of M into strings, e.g., through the convolution operation (see Section 1.1). To explain this let us assume that M is a two-tape TM. Let the pair (abbqba,baqaa) represent the contents of the first and the second tapes, respectively, where q is a state of M. Note that both tapes have the same state of M. We code this two-tape configuration as the string

[ab][ba][bq][qa][ba][a⋄].

We use this type of coding, based on the convolution operation, to represent each configuration in the configuration graphs of multi-tape Turing machines.
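A minimal Python sketch of this coding; '#' again stands in for the padding symbol ⋄:

```python
def conv(x, y, pad="#"):
    """Code a pair of tape contents as a single string of letter pairs."""
    n = max(len(x), len(y))
    return [a + b for a, b in zip(x.ljust(n, pad), y.ljust(n, pad))]

# The two-tape configuration from the text:
print(conv("abbqba", "baqaa"))
# ['ab', 'ba', 'bq', 'qa', 'ba', 'a#']
```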

Turing Machines T1 and T2 are equivalent if for all w∈Σ* we have: T1 accepts w if and only if T2 accepts w, and T1 rejects w if and only if T2 rejects w. We briefly present the proof of the following known result [3] for the reader's intuition, as it will be useful later.

Lemma 5.

There exists a transformation r that, given a TM M, outputs a reversible TM Mr equivalent to M.

Proof.

We outline the construction. Let M be a Turing machine. We build the TM Mr as follows. The configurations of Mr are 2-tuples (C,h), where C is a configuration of M and h∈δ* is a sequence of instructions from δ. The transitions of Mr are defined by (C,h)→(C′,h·i) if and only if i(C)=C′. Thus, Mr simulates M with instruction i and records the instruction i on the second tape.
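The following minimal Python sketch captures the idea; the step argument (returning the successor configuration of M together with the index of the instruction used, or None on halting) is an assumption of the sketch, not part of the construction above:

```python
def make_reversible(step):
    """Given a step function for the deterministic machine M that returns
    (successor configuration, index of the instruction used) or None on
    halting, return a step function for M^r that also records the history.
    Recording the instruction makes every configuration of M^r have at
    most one pre-image."""
    def step_r(state):
        C, history = state
        result = step(C)
        if result is None:          # M halts, so M^r halts as well
            return None
        C2, instr_index = result
        return (C2, history + (instr_index,))
    return step_r
```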

The configuration graph (Conf(Mr),→) satisfies the following important property:

  • Every chain is either finite or isomorphic to (ℕ;S). The graph (Conf(Mr),→) contains no cycles.

Note that a finite chain has both a start and a last configuration. A configuration C is the start (or the bottom) configuration of the chain if in-deg(C)=0. A configuration C is the last (or the top) configuration if out-deg(C)=0.

Note that even if M is total, the configuration graph of Mr might contain infinite chains. We would like to strengthen the lemma above by modifying Mr into a reversible equivalent TM Mτ such that totality of M implies that every chain in Mτ is finite.

Lemma 6.

There is a transformation τ which, given a TM M, outputs Mτ such that:

  1. Mτ is reversible.

  2. Mτ is equivalent to M.

  3. The configuration graph of Mτ consists of finite chains if and only if M is total.

Proof.

The configurations of the TM Mτ are tuples (C,h1,h2,D,Cnt), where C∈Conf(M), h1,h2∈δ*, D∈{→,←}, and Cnt∈{0,1}. We explain the roles of h1, h2, D, and Cnt:

  1. The string h1 is the instruction sequence executed so far (as in Mr), but the head on h1 can be at any position.

  2. The string h2 is used to check whether h1 is a consistent sequence of instructions. The head for h2 is always at the right end of h2.

  3. The symbol D represents the direction of the head for h1. If D=→, then the head moves to the right along h1, and to the left otherwise.

  4. If Cnt=1, then Mτ simulates M for one step.

We represent configurations as (C, h′j¯h′′, h2, D, Cnt), where the overlined symbol indicates the head position. Initial configurations of Mτ are of the form (q0w, λ, λ, →, 0) for some w∈Σ*. Thus, h′j¯h′′ says that the head for the tape h1 is at the symbol j. There is no need to specify the position of the head for h2 because it is always at the right end of h2. Since D and Cnt take only finitely many values, they can be kept in the states of Mτ. Therefore, Mτ is a 3-tape Turing Machine. Now we describe the transitions of Mτ:

  R1: (C, h1¯, λ, →, 0) ↦ (C′, h1j¯, λ, →, 1), where C′=j(C).
     Intuition: This mimics one computation step of M.

  R2: (C, h1j¯, λ, →, 1) ↦ (C, h1j¯, λ, ←, 0).
     Intuition: Preparation to verify whether h1 is a valid sequence of instructions.

  R3: (C, h′j1j2¯h′′, h2, ←, 0) ↦ (C′, h′j1¯j2h′′, h2j2, ←, 0), where j2(C′)=C.
     Intuition: This rule attempts to validate the instruction sequence h1 and records the validation history on tape h2. Note that h2 copies a suffix of h1 in reverse.

  R4: (C, j2¯h′′, h2, ←, 0) ↦ (C′, j2¯h′′, h2j2, →, 0), where j2(C′)=C and C′ is an initial configuration.
     Intuition: Just like Rule R2, this is a preparation step, now to return the head for h1 to the rightmost position.

  R5: (C, h′j1¯j2h′′, h2j3, →, 0) ↦ (C′, h′j1j2¯h′′, h2, →, 0), where j3=j1 and C′=j1(C).
     Intuition: This rule reduces the size of h2 and at the same time validates the consistency between h1 and h2.

Thus, R1 is the simulation rule, R2 and R4 are preparation rules, and R3 and R5 are validation rules. Furthermore, if M on input w halts in t steps, then Mτ on input w runs in time proportional to t². One checks that the construction is correct.

Theorem 7.

The set of all automatic structures (D;E) isomorphic to (ℕ;S) is Π⁰₂-complete. Hence, the problem of deciding whether an automatic structure is isomorphic to (ℕ;S) is undecidable.

Proof.

Let Tot = {M ∣ the Turing machine M is total}. This is a standard Π⁰₂-complete set [25]. We reduce Tot to the set Is(S) = {(D;E) ∣ (D;E) is automatic and (D;E)≅(ℕ;S)}. Note that Is(S) is a Π⁰₂-set. Indeed, (D;E)∈Is(S) if and only if E is an injective function (this can be computably verified), there exists a 0_D∈D such that 0_D has no pre-image with respect to E, and for every x∈D there is an n such that E^n(0_D)=x. This is a Π⁰₂ definition of Is(S).

Let M be a Turing machine. Consider the equivalent Turing machine Mτ built in Lemma 6. Consider the configuration graph (Conf(Mτ);→) of all configurations of Mτ.

Let L be a chain in (Conf(Mτ);→).

This chain has a first element, first(L), such that every other configuration in L is reachable from first(L) through a finite number of transitions. If L is finite, then it has a top element as well. The top element of L is its final element (with no successor), where

L = C1→C2→⋯→Ck and C1=first(L). We introduce the "reverse" copy C1^r, C2^r, …, Ck^r of the chain, with the reversed edge relation →^r (that is, C^r →^r D^r iff D→C), where C^r is obtained from C by renaming each character c in C to the new character c^r. The following lemma utilizes two copies of (Conf(Mτ),→) to build an automatic structure (DM,EM) which captures the totality of any TM M.

Lemma 8.

For any TM M, there exists an automatic structure (DM,EM) such that (DM,EM) is isomorphic to (ℕ;S) if and only if the configuration graph (Conf(Mτ),→) consists of only finite chains.

Let M be a Turing machine. We construct the Turing machine Mτ as in Lemma 6. Based on Mτ we build the structure (DM;EM) as follows:

  1. The domain DM is the set {C ∣ C∈Conf(Mτ)} ∪ {C^r ∣ C∈Conf(Mτ)}.

  2. The set EM is the union of the transitions → and →^r together with the following two sets of transitions:

    (a) Transitions (C,C^r) such that C is the top element of a chain L in Conf(Mτ).

    (b) Transitions (C^r,C′), where C=first(L) for some chain L and C′ is the length-lexicographic successor of C among the set of all first elements of chains in Conf(Mτ).

One can depict the structure (DM;EM) as follows. Let L1,L2,…,Ln,… be the list of all chains of the graph (Conf(Mτ),→), ordered by the length-lexicographical order of their first elements. Amend this list as follows: L1,L1^r,L2,L2^r,…,Ln,Ln^r,…. Starting from the first element of L1, the relation EM successively applies the transitions → until the top element of L1 is reached. Once the top element C of L1 is reached, the relation EM makes a transition to C^r, the reverse of C, and then reverses the transitions by applying →^r until the reverse of the first element of L1 is reached. Then EM transits to the first element of L2, and the process repeats. (One might suggest simplifying this description by letting EM move from the top of Li directly to the bottom of Li+1. This indeed removes the need for the reverse configurations C^r and the transitions →^r. However, this simplification does not guarantee that the relation built in this way is regular.)

It is clear that (DM;EM) is an automatic graph. In this graph the in-degree of each element is at most 1. Also, the out-degree of each element is exactly 1. If the Turing machine M is total, then every chain in (Conf(Mτ);→) is finite. Moreover, the configuration graph contains no cycles. Therefore, EM, starting from the first element of L1, reaches all elements of DM. If M is not total, then there must exist an infinite chain L in the directed graph (Conf(Mτ);→). Let Li be the first such infinite chain. Hence, once EM is applied to the first element of Li, all the transitions stay inside Li. Hence, no element of Li+1 is reachable from the first element of L1. Therefore (DM;EM) is isomorphic to (ℕ;S) if and only if M is total. We have proved the lemma, and hence the theorem.

2.1 Distance functions on automatic successor word structures

We apply the results of the previous section to build various types of distance functions for automatic successor word structures. Given an automatic word structure 𝒜 with predicate U, let u0,u1,u2,… be the listing of all natural numbers in U in increasing order, where u0=0. Recall that the function dU(n)=u_{n+1}−u_n, n∈ℕ, is called the distance function realised by 𝒜. Lemma 6 and Theorem 7 give us tools to code various distance functions into automatic successor word structures.
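For illustration, a minimal Python sketch of dU; the example predicate U is ours:

```python
def distance_function(U, n):
    """d_U(n) = u_{n+1} - u_n, where u_0 < u_1 < ... lists U in increasing order."""
    us = sorted(U)
    return us[n + 1] - us[n]

# Example: U = {2^k - 1 : k >= 0} = {0, 1, 3, 7, 15, ...} gives d_U(n) = 2^n.
U = {2 ** k - 1 for k in range(10)}
print([distance_function(U, n) for n in range(5)])  # prints [1, 2, 4, 8, 16]
```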

Let M be a Turing machine with running time TM. Construct the Turing machine Mτ as in Lemma 6. Then for all x∈{0,1}* we have TM(x) ≤ TMτ(x) ≤ C·TM(x)³, where C≥1 is a fixed constant. Now consider the automatic successor word structure (DM;EM,U), where (DM;EM) is constructed as in Theorem 7 and U={0,1}*∩DM. From the construction of (DM;EM) and the unary predicate U, it is not hard to see that 2·TM(u_n) ≤ dU(n), where u_n is the n-th element of {0,1}* in the length-lexicographic order.

Corollary 9.

For any total Turing machine M there exists an automatic successor word structure 𝒜 such that the distance function dU of 𝒜 bounds the running time TM.

Corollary 10.

There exist automatic successor word structures whose distance functions are non-elementary. (This corollary strengthens and generalizes the results in [5])

3 Automatic successor word structures

Our interest is in the set Is(SW) of all pairs (𝒜,ℬ) such that 𝒜 and ℬ are isomorphic automatic successor word structures. The question is this: is there an algorithm that, given two automatic successor word structures, decides if these structures are isomorphic? This is a promise problem in the sense that we only care about those inputs that are automatic successor word structures. If any of the inputs is not an automatic successor word structure, then we do not care about the output of the algorithm. See the discussion of our contributions in the introduction.

Now we prove that this isomorphism problem for automatic successor word structures is undecidable. Our proof uses the construction of the structures (DM;EM) from Theorem 7.

Theorem 11.

The isomorphism problem for automatic successor word structures is undecidable.

Proof.

Our goal is to show that no algorithm exists that, given two automatic presentations of successor word structures, decides if these structures are isomorphic. In our proof, we use the operator τ defined in Lemma 6. By this lemma, we may always assume that our Turing machines are reversible. Moreover, a Turing machine M is total if and only if, in the configuration graph (Conf(Mτ);→) of Mτ, all chains are finite. Our proof also uses the following folklore fact from computability theory [25]:

Lemma 12.

No algorithm exists that given two total Turing machines M1 and M2 decides if M1 is equivalent to M2.

We assume that the outputs of Turing machines are either accept or reject. Let M be a total Turing machine (over an alphabet Σ). For each w∈Σ*, let TM(w) be the number of steps M takes to halt its computation on w. Then the following lemma can be established from Lemma 12:

Lemma 13.

No algorithm exists that given two total TMs M1 and M2 decides if TM1=TM2.

Let M and M′ be Turing machines. For M and M′, we build the structure 𝒜(M,M′) using (DM;EM) and (DM′;EM′) as constructed in the proof of Theorem 7.

We assume that the domains DM and DM′ are disjoint. We build 𝒜(M,M′) as follows.

  1. The domain D(M,M′) is DM∪DM′.

  2. The unary predicate U(M,M′) is the set of all initial configurations of M and M′.

  3. To build the binary relation S(M,M′) we need a few notations. In the structures (DM;EM) and (DM′;EM′), respectively, let us list all the chains ordered according to the length-lexicographic order of their first elements:

     L1,M, L1,M^r, …, Li,M, Li,M^r, … and L1,M′, L1,M′^r, …, Li,M′, Li,M′^r, ….

     In each of these lists consider the sub-sequence of chains whose first elements are legal configurations: Li1,M, …, Lik,M, … and Lj1,M′, …, Ljk,M′, …. For these sub-sequences, the sets of first elements of the chains are regular. Denote these first elements by i1,M, …, ik,M, … and j1,M′, …, jk,M′, …. Define the relation S(M,M′) by:

     • S(M,M′)(ik,M^r) = jk,M′  [simulate M′ on the same input as M];

     • S(M,M′)(jk,M′^r) = EM(ik,M^r)  [both M and M′ have now been simulated on the same input];

     • S(M,M′)(EM^{-1}(ik+1,M)) = EM′(jk,M′^r);

     • S(M,M′)(EM′^{-1}(jk+1,M′)) = ik+1,M;

     • In all other cases, where no rule above applies, the successor function simply follows EM and EM′, i.e., S(M,M′)(w)=EM(w) for w∈DM and S(M,M′)(w)=EM′(w) for w∈DM′.
Lemma 14.

𝒜(M,M′) is automatic. If M and M′ are total, then 𝒜(M,M′) is a successor word structure.

Lemma 15.

Let M1 and M2 be total. Then TM1=TM2 iff 𝒜(M1,M2) ≅ 𝒜(M2,M1).

If TM1=TM2, then 𝒜(M1,M2) and 𝒜(M2,M1) are isomorphic. Assume that TM1≠TM2. Let w be the first element, in the length-lexicographic order, such that TM1(w)≠TM2(w). Assume that ik,M1 is the initial configuration that corresponds to w. Then in the structure 𝒜(M1,M2), the elements ik,M1 and jk,M2 belong to the unary predicate. The distance between these two elements ik,M1 and jk,M2 equals 2TM1(w)+1. Similarly, the elements jk,M2 and ik,M1 belong to the unary predicate in 𝒜(M2,M1). The distance between jk,M2 and ik,M1 in 𝒜(M2,M1) equals 2TM2(w)+1. Obviously, these two distances are not equal, as TM1(w)≠TM2(w). If there is an isomorphism f from 𝒜(M1,M2) onto 𝒜(M2,M1), then we must have f(ik,M1)=jk,M2 and f(jk,M2)=ik,M1. This is impossible because the distances between these pairs of elements are not equal.

4 The agreement problem

Recall that ordered word structures are structures isomorphic to (ℕ;≤,U), where ≤ is the natural order and U is a unary predicate. The isomorphism problem for automatic ordered word structures is a natural follow-up to the isomorphism problem for automatic successor word structures. V. Bárány was the first to ask and investigate the isomorphism problem for automatic ordered word structures [1, Question 5.9.3]. The techniques used in addressing the isomorphism problem for automatic successor word structures cannot be applied in this case. One reason is that the distance functions dU of automatic ordered word structures are bounded by exponential functions, while the function dU of an automatic successor word structure can bound from above any given computable function (see Corollary 9). Another reason is that the set of all automatic ordered word structures is decidable, as opposed to the Π⁰₂-completeness of the set of all automatic presentations of successor word structures.

Let 𝒜=(D;≤,U) be an automatic ordered word structure. Let Ũ be the image of U under the isomorphism from (D;≤) onto (ℕ;≤).

Proposition 16.

Two word structures 𝒜1=(D1;≤,U1) and 𝒜2=(D2;≤,U2) are isomorphic if and only if Ũ1=Ũ2.

In this section we attack the following agreement problem.

Definition 17.

Automatic word structures 𝒜1=(D1;≤,U1) and 𝒜2=(D2;≤,U2) agree if the set Ũ1∩Ũ2 is non-empty. Otherwise, we say that these structures disagree.

The agreement problem asks to design an algorithm that, given automatic ordered word structures 𝒜1=(D1;≤,U1) and 𝒜2=(D2;≤,U2), decides if these two structures agree.

Theorem 18.

The agreement problem for automatic ordered word structures is Σ⁰₁-complete.

Proof.

Let ℕ₊=ℕ∖{0}. We write ℕ[x̄] for the set of all polynomials over ℕ in the variables x̄. We use the following theorem from [42] that solves Hilbert's 10th problem.

Theorem 19.

For any Σ⁰₁-complete set X⊆ℕ₊, there exist two polynomials p1(x,x̄) and p2(x,x̄) from ℕ[x,x̄] such that n∈X if and only if for some x̄ we have p1(n,x̄)=p2(n,x̄).

From now on we fix k=|x̄| witnessing this theorem. The theorem implies the Σ⁰₁-completeness of the following set: Agr = {(p1,p2) ∣ p1,p2∈ℕ[x̄] & ∃x̄ (p1(x̄)=p2(x̄))}.

We define a computable bijection σ = cnt∘π : ℕ₊^k → ℕ, where:

  • The function π: (x1,…,xk) ↦ a1^{x1}⋯ak^{xk} encodes tuples as words in a1⁺⋯ak⁺;

  • The function cnt maps each word a1^{x1}⋯ak^{xk} to its position in the length-lexicographic ordering of a1⁺⋯ak⁺.

Using σ, given two polynomials p1,p2∈ℕ[x̄], we define the following functions for i=1,2:

ψi(x̄) = pi(x̄) + Σ_{σ(z̄)<σ(x̄)} (p1(z̄)+p2(z̄)).
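For intuition, a minimal Python sketch of σ and ψi for k=2; the function names and the concrete polynomials are ours and serve only as an illustration:

```python
from itertools import product

def sigma_order(k, max_total):
    """All tuples of positive integers with coordinate sum <= max_total,
    listed in the length-lexicographic order of the words a1^x1...ak^xk
    (a finite initial segment of the ordering behind sigma)."""
    letters = [chr(ord("a") + i) for i in range(k)]
    word = lambda xs: "".join(l * x for l, x in zip(letters, xs))
    tuples = [xs for xs in product(range(1, max_total + 1), repeat=k)
              if sum(xs) <= max_total]
    return sorted(tuples, key=lambda xs: (sum(xs), word(xs)))

def psi(p_i, p1, p2, xbar, order):
    """psi_i(xbar) = p_i(xbar) + sum of p1(z)+p2(z) over all earlier tuples z."""
    earlier = order[: order.index(xbar)]
    return p_i(*xbar) + sum(p1(*z) + p2(*z) for z in earlier)

# Illustration with k = 2 and two arbitrary polynomials.
p1 = lambda x, y: x * x + y
p2 = lambda x, y: 2 * x * y
order = sigma_order(2, 6)
print(psi(p1, p1, p2, (1, 2), order), psi(p2, p1, p2, (1, 2), order))
```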

Let U1 and U2 be the ranges of ψ1 and ψ2, respectively. Since the sets U1 and U2 depend on p1 and p2, we write them as U1(p1,p2) and U2(p1,p2), respectively. Here is a lemma that connects the set Agr with the sets U1 and U2.

Lemma 20.

For p1,p2∈ℕ[x̄] we have the following. The set U1∩U2 is non-empty if and only if the polynomials p1 and p2 agree on some x̄, that is, (p1,p2)∈Agr.

Consider the following structures: (ℕ;≤,U1(p1,p2)) and (ℕ;≤,U2(p1,p2)). These can be viewed as ordered counterparts of the structures 𝒜(M1,M2) and 𝒜(M2,M1) from Theorem 11.

Lemma 21.

For any given polynomials p1,p2∈ℕ[x̄], the two structures (ℕ;≤,U1(p1,p2)) and (ℕ;≤,U2(p1,p2)) have automatic presentations.

Proof.

We use the ideas from D. Kuske et al. [40]. Let ℬ=(S,Δ,I,F) be an NFA over an alphabet Σ. For w∈Σ*, let Run(ℬ,w) be the set of all accepting runs of ℬ on w. Let Run(ℬ) be the set of all accepting runs of ℬ. The following is proved in [40].

Lemma 22.

For any polynomial p(x̄)∈ℕ[x̄], we can build a non-deterministic finite automaton ℬp over the alphabet Σk={a1,…,ak} such that L(ℬp)=a1⁺⋯ak⁺ and for all (x1,…,xk)∈ℕ₊^k we have p(x̄)=|Run(ℬp, a1^{x1}⋯ak^{xk})|.

Let p1,p2∈ℕ[x̄]. Let ℬ1 and ℬ2 be the NFAs constructed according to the lemma above, so ℬ1=ℬp1 and ℬ2=ℬp2. We can assume that the sets of states of these two automata are disjoint. Hence, Run(ℬ1)∩Run(ℬ2)=∅.
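For intuition, here is a minimal Python sketch that counts accepting runs of an NFA on a word by dynamic programming; the toy automaton in the example is ours (it realises the polynomial p(x)=x) and is not the ℬp constructed in Lemma 22:

```python
def count_accepting_runs(states, delta, initial, final, word):
    """|Run(B, w)|: count the accepting runs of an NFA B on w by dynamic
    programming over the prefixes of w."""
    runs = {q: (1 if q in initial else 0) for q in states}
    for a in word:
        new = {q: 0 for q in states}
        for p in states:
            for q in delta.get((p, a), ()):
                new[q] += runs[p]
        runs = new
    return sum(runs[q] for q in final)

# A toy NFA over {a}: it guesses one position to move from state 0 to state 1,
# so the number of accepting runs on a^x is exactly x (the polynomial p(x) = x).
states, delta = {0, 1}, {(0, "a"): [0, 1], (1, "a"): [1]}
print([count_accepting_runs(states, delta, {0}, {1}, "a" * x) for x in range(5)])
# prints [0, 1, 2, 3, 4]
```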

Now we build an automatic presentation 𝒜1=(D1;≤1,S1) of (ℕ;≤,U1(p1,p2)). An automatic presentation 𝒜2 of the structure (ℕ;≤,U2(p1,p2)) is built similarly.

  1. The domain D1 is the set Run(ℬ1)∪Run(ℬ2). The domain D1 is a regular set.

  2. To define the linear order ≤1 on D1 we use the relation R={(r,w) ∣ r∈Run(ℬ1,w)∪Run(ℬ2,w)}. Clearly, the relation R is regular. If (r,w)∈R, then we write label(r)=w. Given two elements (accepting runs) r1 and r2 from D1, we define the order ≤1 as follows:

    (a) When label(r1)=label(r2):

      • If r1∈Run(ℬ1) and r2∈Run(ℬ2), then r1 <1 r2.

      • For runs r1 and r2 of the same automaton, r1 ≤1 r2 whenever r1 precedes r2 lexicographically (runs are compared as strings of states).

    (b) If label(r1) precedes label(r2) in the length-lexicographic order, then r1 <1 r2.

    It is not hard to see that the relation ≤1 is regular.

Thus, we have built an automatic structure (D1;≤1). This structure is isomorphic to the linearly ordered set (ℕ;≤) via the bijective function ξ[p1,p2]: ℕ₊→D1 defined by

ξ[p1,p2](n)=r if |{r′ ∣ r′ ≤1 r}|=n.

Let S1 be the image of the unary relation U1(p1,p2)⊆ℕ under the isomorphism ξ[p1,p2]. Our goal is to show that S1 is a regular set. If so, then we will have built the automatic presentation (D1;≤1,S1) of the structure (ℕ;≤,U1(p1,p2)).

The remaining part is to prove the regularity of S1. We claim that S1 is identical to the following set: {r∈Run(ℬ1) ∣ ∀t∈Run(ℬ1) (label(t)=label(r) → t ≤1 r)}.

Indeed, the set S1 collects every run rs of ℬ1 that is ≤1-greater than any other run rw∈Run(ℬ1) with the same label label(rw)=label(rs)=w. Moreover, no two distinct runs in S1 can share the same label w. Thus, it suffices to prove that rs=ξ[p1,p2](ψ1(π^{-1}(w))) for any input w=label(rs).

Note that {r ∣ r ≤1 rs} can be partitioned into two parts:

  • {r ∣ label(r) precedes w in the length-lexicographic order}, whose cardinality is the sum Σ_{w′<w} |{r∈D1 ∣ label(r)=w′}|;

  • {r∈Run(ℬ1) ∣ label(r)=w, r ≤1 rs}, which equals {r∈Run(ℬ1) ∣ label(r)=w} due to the maximality of rs.

The following facts hold for any input w:

  1. |{r∈Run(ℬ1) ∣ label(r)=w}| = |Run(ℬ1,w)| = p1(π^{-1}(w)).

  2. |{r∈Run(ℬ2) ∣ label(r)=w}| = |Run(ℬ2,w)| = p2(π^{-1}(w)).

  3. |{r∈D1 ∣ label(r)=w}| = p1(π^{-1}(w)) + p2(π^{-1}(w)).

We finally conclude that |{r ∣ r ≤1 rs}| = Σ_{w′<w} (p1(π^{-1}(w′)) + p2(π^{-1}(w′))) + p1(π^{-1}(w)) = ψ1(π^{-1}(w)). Therefore, rs=ξ[p1,p2](ψ1(π^{-1}(w))) holds for all w. Thus, with Lemma 20 and Lemma 21 in hand, we have proved the theorem.

Corollary 23.

Given two automatic ordered word structures 𝒜1 and 𝒜2, it is Π⁰₁-complete to decide whether Ũ1∩Ũ2=∅.

5 Is S intrinsically regular in (ℕ;ES)?

Any automatic presentation of (ℕ;S) is an automatic presentation of (ℕ;ES). We prove that the converse does not hold. This is counter-intuitive. Indeed, in a computability-theoretic setting, the computability of ES implies the computability of S. Contrasting this, our result shows that it is not, in general, possible to build finite automata for S from finite automata recognizing the relation ES. Our proof non-trivially adapts ideas from [38, Theorem 4.1, page 447].

Theorem 24.

There is an automatic presentation of the structure (ℕ;ES) in which the successor relation S is not regular.

Proof.

The domain D of the desired automatic structure is Σ* with Σ={0,1}. In our construction, we will use the following three auxiliary functions from [38]:

  1. Every binary string x is uniquely decomposed into:

    • ep(x): Σ*→Σ*, the bits of x at even positions (indexing starts at 0);

    • op(x): Σ*→Σ*, the bits of x at odd positions.

    Thus, x and the pair ⟨ep(x),op(x)⟩ are equivalent representations.

  2. Let nx=|ep(x)| and mx=|op(x)|. Thus, nx=mx or nx=mx+1, and |x|=nx+mx. Both ep(x) and op(x) are interpreted as natural numbers written in the least-significant-bit-first binary form.

  3. Define next: Σ*→Σ* by next(x) = (ep(x)+2·op(x)+1) mod 2^{nx}.

Let z be a binary string. Call z a start-point if ep(z)=0. Our fourth auxiliary function f: Σ*→Σ* is the following:

  (a) If x is such that nx≤2, then f(x) is the length-lexicographic successor of x.

  (b) If ⟨next(x),op(x)⟩ is not a start-point, then f(x)=⟨next(x),op(x)⟩.

  (c) If ⟨next(x),op(x)⟩ is a start-point and op(x)<2^{mx}−1, then f(x)=y, where |y|=|x|, ep(y)=0^{nx} and op(y)=op(x)+1.

  (d) If ⟨next(x),op(x)⟩ is a start-point and op(x)=2^{mx}−1, then f(x)=0^{nx+mx+1}.

We explain f informally. Think of op(x) as a parameter of the input x viewed as ⟨ep(x),op(x)⟩. Starting at ep(x)=0^{nx}, f consecutively adds 2·op(x)+1 to ep(x). Once ep(x) has run through all elements of {0,…,2^{nx}−1} and returned to 0^{nx}, the value op(x) is incremented by 1.

For a string x=⟨ep(x),op(x)⟩, one can inductively check that there is a unique u with 0≤u≤2^{nx}−1 such that ep(x)=u·(2·op(x)+1) mod 2^{nx}. Based on this, we define the function U(x)=u which, given the string x, outputs the solution of the equation ep(x)=u·(2·op(x)+1) modulo 2^{nx}. So, U(x) is the distance between x and the start-point ⟨0^{nx},op(x)⟩.
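A minimal Python sketch of these auxiliary functions, under the least-significant-bit-first convention stated above; U(x) is computed via the modular inverse of the odd number 2·op(x)+1 (the names are ours):

```python
def ep(x):   # bits of x at even positions (indexing starts at 0)
    return x[0::2]

def op(x):   # bits of x at odd positions
    return x[1::2]

def val(s):  # value of a least-significant-bit-first binary string
    return sum(int(b) << i for i, b in enumerate(s))

def bits(v, length):  # inverse of val, padded to `length` bits
    return "".join(str((v >> i) & 1) for i in range(length))

def next_x(x):
    """next(x) = (ep(x) + 2*op(x) + 1) mod 2^{n_x}, returned as an n_x-bit string."""
    n = len(ep(x))
    return bits((val(ep(x)) + 2 * val(op(x)) + 1) % 2 ** n, n)

def U(x):
    """The unique u with ep(x) = u*(2*op(x)+1) mod 2^{n_x}; since 2*op(x)+1 is
    odd, it is invertible modulo 2^{n_x} (pow(..., -1, m) needs Python 3.8+)."""
    n = len(ep(x))
    return val(ep(x)) * pow(2 * val(op(x)) + 1, -1, 2 ** n) % 2 ** n
```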

Set U> = {x ∣ U(x) > 2^{nx−1}}, which has the following property in our construction:

Lemma 25 ([38]).

The unary predicate U> is not regular.

Using Lemma 25, we now define the desired function g:

  1. If nx≤2, then g(x) is the successor of x with respect to the length-lexicographic ordering.

  2. If 0<U(x)<2^{nx−1}, then g(x)=f^{-1}(x).

  3. If U(x)=0, then g(x)=⟨10^{nx−1},op(x)⟩.

  4. If 2^{nx−1}≤U(x)<2^{nx}−1, then g(x)=f(x).

  5. If U(x)=2^{nx}−1 and op(x)<2^{mx}−1, then g(x)=f^{-1}(⟨10^{nx−1},op(x)+1⟩).

  6. If U(x)=2^{nx}−1 and op(x)=2^{mx}−1, then g(x)=f^{-1}(10^{nx+mx}).

The function g is a successor function on the domain {0,1}*. Call the rules (2) and (4) above the non-regular rules. Consider now the graph of g: Graph_g = {(x,y) ∣ y=g(x)}. The non-regular rules, by Lemma 25, guarantee that the successor function g is not a regular relation. Thus, ({0,1}*; g) is a presentation of the successor structure in which g is not regular.

Consider the edge relation Eg defined by g. We show that Eg is regular. For this, consider the rules (3), (5), and (6). Call them regular rules as they are finite automata recognizable. The edge relation Eg can be defined as follows. Consider a pair {x,y}.

  1. If U(x),U(y)∉{0, 2^{nx−1}, 2^{nx}−1}, then E(x,y) holds when f(x)=y ∨ f(y)=x.

  2. If U(x)=0, then E(x,y) holds when (U(y)=1 ∨ U(y)=2^{nx−1}) ∧ op(x)=op(y).

  3. If U(x)=2^{nx−1}, then E(x,y) holds when y=f(x) ∨ (U(y)=0 ∧ op(x)=op(y)).

  4. If U(x)=2^{nx}−1 ∧ op(x)<2^{mx}−1, then E(x,y) holds when x=f(y) ∨ (U(y)=2^{nx−1}−1 ∧ op(y)=op(x)+1).

  5. If U(x)=2^{nx}−1 ∧ op(x)=2^{mx}−1, then E(x,y) holds when x=f(y) ∨ (U(y)=2^{nx−1}−1 ∧ op(y)=0 ∧ |y|=|x|+1).

The regularity of these rules depends on the regularity of the unary relation U(x)∈{0, 2^{nx−1}, 2^{nx}−1, 2^{nx−1}−1} given by the regular rules (3), (5) and (6) in the definition of g. More formally, this unary relation can be recognised as follows:

  1. U(x)=0 if ep(x)=0.

  2. U(x)=2^{nx−1} if ep(x)∈10*.

  3. U(x)=2^{nx}−1 if ⟨next(x),op(x)⟩ is a start-point.

  4. U(x)=2^{nx−1}−1 if ∃y (U(y)=2^{nx−1} ∧ f(x)=y).

So, the relation Eg is regular. The theorem is proved.

6 Distance functions on ordered words

The distance functions on automatic successor word structures can bound, by Corollary 9, any computable function. In contrast, the distance functions on automatic ordered word structures are bounded by an exponential function. Hence, one would like to study distance functions realised by automatic ordered word structures.

A radical function is r(n)=n^{1/a}, where a>1. A function f: ℕ→ℕ is intermediate if asymptotically f lies strictly between every polynomial with non-negative coefficients and 2^n. An example is b^{n^{1/a}}, where a,b∈ℕ and a,b≥2.

Theorem 26.

Polynomial, exponential, radical, logarithmic, and intermediate functions can all be realised as distance functions of automatic ordered word structures.

7 (ℕ;S) versus (ℕ;≤)

Let Ma={n∈ℕ ∣ n is a multiple of a}, where a≥1. In the known automatic presentations of (ℕ;S), the relation ≤ is regular. The unary predicate Ma is definable in the extension of FO-logic based on ≤ with the "there exist n many modulo m" quantifiers ∃^{n,m}. Hence, if ≤ is regular, then so is the set Ma [38]. The known automatic presentations (D;SD) of (ℕ;S) (over an alphabet Σ) in which ≤ is not regular [38] have the following two properties:

  • Sparseness Property: The function n ↦ |D∩Σ^n| is polynomial.

  • Unboundedness Property: The length difference between the pairs (a,b) with a≤b and |b|<|a| is unbounded.

The Sparseness Property is used to show that the sets Ma, a≥1, are all regular. The Unboundedness Property is the key to showing that ≤ is not regular. Two questions arise. The first question asks if sparseness is necessary to show that the sets Ma are regular. The second asks if the unboundedness property is necessary to show that ≤ is not regular in automatic presentations of the successor structure. The following theorem answers the first question.

Theorem 27.

There is an automatic presentation ({0,1}*;S) of (ℕ;S) such that

  1. The transitive closure S* of S is not regular.

  2. For all integers a≥1 the sets Ma are regular.

To answer the second question, we have the next definition:

Definition 28.

A pair (a,b) in an automatic presentation (D;SD) of (ℕ;S) is non-standard if a SD* b and |a|>|b|, where SD* is the transitive closure of SD.

The following proposition is easy to prove.

Proposition 29.

For any presentation (D;SD) of (ℕ;S), if the length difference between the components of non-standard pairs is unbounded, then (D;SD,SD*) is not automatic.

Now we answer the second question.

Theorem 30.

There is an automatic presentation (D;SD) of (ℕ;S) with no non-standard pairs such that (D;SD,SD*) is not automatic but (D;SD,M2,M3,M4,…) is automatic.

References

  • [1] Vince Bárány. Automatic presentations of infinite structures. Phd thesis, RWTH Aachen University, Germany, 2007. URL: http://darwin.bth.rwth-aachen.de/opus3/volltexte/2007/2019/.
  • [2] Vince Bárány, Erich Grädel, and Sasha Rubin. Automata-based presentations of infinite structures. In Javier Esparza, Christian Michaux, and Charles Steinhorn, editors, Finite and Algorithmic Model Theory, volume 379 of London Mathematical Society Lecture Note Series, pages 1–76. Cambridge University Press, 2011.
  • [3] C. H. Bennett. Logical reversibility of computation. IBM Journal of Research and Development, 17(6):525–532, 1973. doi:10.1147/rd.176.0525.
  • [4] Dmitry Berdinsky and Bakh Khoussainov. Cayley automatic representations of wreath products. International Journal of Foundations of Computer Science, 27(02):147–159, 2016. doi:10.1142/S0129054116400049.
  • [5] Dmitry Berdinsky, Sanjay Jain, Bakh Khoussainov, and Frank Stephan. String compression in fa-presentable structures. Theoretical Computer Science, 947:113705, 2023. doi:10.1016/j.tcs.2023.113705.
  • [6] Achim Blumensath and Erich Grädel. Automatic structures. In 15th Annual IEEE Symposium on Logic in Computer Science, Santa Barbara, California, USA, June 26-29, 2000, pages 51–62. IEEE Computer Society, 2000. doi:10.1109/LICS.2000.855755.
  • [7] Achim Blumensath and Erich Grädel. Finite presentations of infinite structures: Automata and interpretations. Theory Comput. Syst., 37(6):641–674, 2004. doi:10.1007/s00224-004-1133-y.
  • [8] J. Richard Büchi. Weak second-order arithmetic and finite automata. Mathematical Logic Quarterly, 6(1-6):66–92, 1960. doi:10.1002/malq.19600060105.
  • [9] Cristian Calude, Peter Hertling, Bakh Khoussainov, and Yongge Wang. Recursively enumerable reals and chaitin omega numbers. Theor. Comput. Sci., 255(1-2):125–149, 2001. doi:10.1016/S0304-3975(99)00159-0.
  • [10] Christian Delhommé. Automaticité des ordinaux et des graphes homogènes. Comptes Rendus Mathematique, 339(1):5–10, 2004. doi:10.1016/j.crma.2004.03.035.
  • [11] Dmitry Berdinsky and Bakh Khoussainov. On automatic transitive graphs. In Arseny M. Shur and Mikhail V. Volkov, editors, Developments in Language Theory, volume 8633 of Lecture Notes in Computer Science, pages 1–12, Cham, 2014. Springer International Publishing. doi:10.1007/978-3-319-09698-8_1.
  • [12] David B. A. Epstein, M. S. Paterson, J. W. Cannon, D. F. Holt, S. V. Levy, and W. P. Thurston. Word Processing in Groups. A. K. Peters, Ltd., USA, 1992.
  • [13] G. A. Freĭman. Addition of finite sets. Sov. Math., Dokl., 5:1366–1370, 1964.
  • [14] Moses Ganardi and Bakh Khoussainov. Automatic equivalence structures of polynomial growth. In Maribel Fernández and Anca Muscholl, editors, 28th EACSL Annual Conference on Computer Science Logic, CSL 2020, January 13-16, 2020, Barcelona, Spain, volume 152 of LIPIcs, pages 21:1–21:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.CSL.2020.21.
  • [15] Alex Gavryushkin, Bakh Khoussainov, and Frank Stephan. Reducibilities among equivalence relations induced by recursively enumerable structures. Theor. Comput. Sci., 612:137–152, 2016. doi:10.1016/j.tcs.2015.11.042.
  • [16] Erich Grädel. Automatic structures: Twenty years later. In Holger Hermanns, Lijun Zhang, Naoki Kobayashi, and Dale Miller, editors, LICS ’20: 35th Annual ACM/IEEE Symposium on Logic in Computer Science, Saarbrücken, Germany, July 8-11, 2020, pages 21–34. ACM, 2020. doi:10.1145/3373718.3394734.
  • [17] Michael Gromov. Groups of polynomial growth and expanding maps (with an appendix by Jacques Tits). Publications Mathématiques de l’IHÉS, 53:53–78, 1981. doi:10.1007/BF02698687.
  • [18] Matthew Harrison-Trainor. An introduction to the scott complexity of countable structures and a survey of recent results. Bull. Symb. Log., 28(1):71–103, November 2022. doi:10.1017/bsl.2021.62.
  • [19] Greg Hjorth, Bakh Khoussainov, Antonio Montalbán, and André Nies. From automatic structures to borel structures. In Proceedings of the Twenty-Third Annual IEEE Symposium on Logic in Computer Science, LICS 2008, 24-27 June 2008, Pittsburgh, PA, USA, pages 431–441. IEEE, IEEE Computer Society, 2008. doi:10.1109/LICS.2008.28.
  • [20] Bernard R. Hodgson. Théories décidables par automate fini. PhD thesis, Université de Montréal, 1976.
  • [21] Sanjay Jain, Bakh Khoussainov, Philipp Schlicht, and Frank Stephan. Tree-automatic scattered linear orders. Theoretical Computer Science, 626:83–96, 2016. doi:10.1016/j.tcs.2016.02.008.
  • [22] Sanjay Jain, Bakh Khoussainov, Philipp Schlicht, and Frank Stephan. The isomorphism problem for tree-automatic ordinals with addition. Information Processing Letters, 149:19–24, 2019. doi:10.1016/j.ipl.2019.05.004.
  • [23] Sanjay Jain, Bakh Khoussainov, and Frank Stephan. Finitely generated semiautomatic groups. Comput., 7(2-3):273–287, 2018. doi:10.3233/COM-180089.
  • [24] Sanjay Jain, Bakh Khoussainov, Frank Stephan, Dan Teng, and Siyuan Zou. Semiautomatic structures. Theory Comput. Syst., 61(4):1254–1287, 2017. doi:10.1007/s00224-017-9792-7.
  • [25] Hartley Rogers Jr. Theory of recursive functions and effective computability (Reprint from 1967). MIT Press, Cambridge, MA, USA, 1987. URL: http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=3182.
  • [26] Lukasz Kaiser, Sasha Rubin, and Vince Bárány. Cardinality and counting quantifiers on omega-automatic structures. In Susanne Albers and Pascal Weil, editors, STACS 2008, 25th Annual Symposium on Theoretical Aspects of Computer Science, Bordeaux, France, February 21-23, 2008, Proceedings, volume 1 of LIPIcs, pages 385–396. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Germany, 2008. doi:10.4230/LIPIcs.STACS.2008.1360.
  • [27] Olga Kharlampovich, Bakh Khoussainov, and Alexei Miasnikov. From automatic structures to automatic groups. Groups, Geometry, and Dynamics, 8(1):157–198, 2014. doi:10.4171/GGD/221.
  • [28] Bakh Khoussainov and Anil Nerode. Automatic presentations of structures. In Daniel Leivant, editor, Logic and Computational Complexity, pages 367–392, Berlin, Heidelberg, 1995. Springer Berlin Heidelberg. doi:10.1007/3-540-60178-3_93.
  • [29] Bakh Khoussainov, Steffen Lempp, and Theodore A. Slaman. Computably enumerable algebras, their expansions, and isomorphisms. Int. J. Algebra Comput., 15(3):437–454, 2005. doi:10.1142/S0218196705002281.
  • [30] Bakh Khoussainov, Jiamou Liu, and Mia Minnes. Unary automatic graphs: An algorithmic perspective. In Manindra Agrawal, Ding-Zhu Du, Zhenhua Duan, and Angsheng Li, editors, Theory and Applications of Models of Computation, 5th International Conference, TAMC 2008, Xi’an, China, April 25-29, 2008. Proceedings, volume 4978 of Lecture Notes in Computer Science, pages 542–553, Berlin, Heidelberg, 2008. Springer. doi:10.1007/978-3-540-79228-4_47.
  • [31] Bakh Khoussainov and Mia Minnes. Three lectures on automatic structures. In Proceedings of Logic Colloquium, 2007.
  • [32] Bakh Khoussainov and Mia Minnes. Model-theoretic complexity of automatic structures. Annals of Pure and Applied Logic, 161(3):416–426, 2009. Papers presented at the Symposium on Logical Foundations of Computer Science 2007. doi:10.1016/j.apal.2009.07.012.
  • [33] Bakh Khoussainov and Anil Nerode. Automata Theory and its Applications, volume 21. Springer Science & Business Media, 2001. doi:10.1007/978-1-4612-0171-7.
  • [34] Bakh Khoussainov, André Nies, Sasha Rubin, and Frank Stephan. Automatic structures: Richness and limitations. Log. Methods Comput. Sci., 3(2), 2007. doi:10.2168/LMCS-3(2:2)2007.
  • [35] Bakh Khoussainov and Sasha Rubin. Graphs with automatic presentations over a unary alphabet. J. Autom. Lang. Comb., 6(4):467–480, April 2001. doi:10.25596/jalc-2001-467.
  • [36] Bakh Khoussainov and Sasha Rubin. Automatic structures: overview and future directions. J. Autom. Lang. Comb., 8(2):287–301, April 2003. doi:10.25596/jalc-2003-287.
  • [37] Bakh Khoussainov, Sasha Rubin, and Frank Stephan. On automatic partial orders. In Proceedings of the Eighteenth Annual IEEE Symposium on Logic in Computer Science (LICS 2003), pages 168–177. IEEE Computer Society Press, June 2003. doi:10.1109/LICS.2003.1210056.
  • [38] Bakh Khoussainov, Sasha Rubin, and Frank Stephan. Definability and regularity in automatic structures. In Volker Diekert and Michel Habib, editors, STACS 2004, 21st Annual Symposium on Theoretical Aspects of Computer Science, Montpellier, France, March 25-27, 2004, Proceedings, volume 2996 of Lecture Notes in Computer Science, pages 440–451. Springer, 2004. doi:10.1007/978-3-540-24749-4_39.
  • [39] Bakh Khoussainov, Theodore A. Slaman, and Pavel Semukhin. P01-presentations of algebras. Arch. Math. Log., 45(6):769–781, 2006. doi:10.1007/s00153-006-0013-3.
  • [40] Dietrich Kuske, Jiamou Liu, and Markus Lohrey. The isomorphism problem on classes of automatic structures with transitive relations. Transactions of the American Mathematical Society, 365(10):5103–5151, October 2013. doi:10.1090/S0002-9947-2013-05766-2.
  • [41] Dietrich Kuske and Markus Lohrey. First-order and counting theories of omega-automatic structures. In Luca Aceto and Anna Ingólfsdóttir, editors, Foundations of Software Science and Computation Structures, 9th International Conference, FOSSACS 2006, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2006, Vienna, Austria, March 25-31, 2006, Proceedings, volume 3921 of Lecture Notes in Computer Science, pages 322–336, Berlin, Heidelberg, 2006. Springer. doi:10.1007/11690634_22.
  • [42] Yuri Vladimirovich Matiyasevich. Hilbert’s tenth problem. MIT press, 1993.
  • [43] André Nies and Pavel Semukhin. Finite automata presentable abelian groups. In Sergei N. Artemov and Anil Nerode, editors, Logical Foundations of Computer Science, pages 422–436, Berlin, Heidelberg, 2007. Springer Berlin Heidelberg. doi:10.1007/978-3-540-72734-7_29.
  • [44] André Nies, Frank Stephan, Markus Lohrey, Pavel Semukhin, Dimitry Berdinsky, et al. Personal communications (over many years).
  • [45] André Nies and Richard M. Thomas. Fa-presentable groups and rings. Journal of Algebra, 320(2):569–585, 2008. Computational Algebra. doi:10.1016/j.jalgebra.2007.04.015.
  • [46] Graham P. Oliver and Richard M. Thomas. Automatic presentations for finitely generated groups. In Volker Diekert and Bruno Durand, editors, STACS 2005, 22nd Annual Symposium on Theoretical Aspects of Computer Science, Stuttgart, Germany, February 24-26, 2005, Proceedings, volume 3404 of Lecture Notes in Computer Science, pages 693–704. Springer, 2005. doi:10.1007/978-3-540-31856-9_57.
  • [47] Michael O Rabin. Decidability of second-order theories and automata on infinite trees. Transactions of the american Mathematical Society, 141:1–35, 1969.
  • [48] Rodney Downey and Alexander Melnikov. Computable Structure Theory: A Unified Approach. Springer, 2024.
  • [49] Sasha Rubin. Automatic structures. PhD thesis, University of Auckland New Zealand, 2004.
  • [50] Sasha Rubin. Automata presenting structures: A survey of the finite string case. The Bulletin of Symbolic Logic, 14(2):169–209, 2008. doi:10.2178/bsl/1208442827.
  • [51] Dana Scott. Logic with denumerably long formulas and finite strings of quantifiers. Journal of Symbolic Logic, 36(1):1104–329, 1965. doi:10.2307/2271545.
  • [52] Frank Stephan. Automatic structures — recent results and open questions. Journal of Physics: Conference Series, 622(1):012013, May 2015. doi:10.1088/1742-6596/622/1/012013.
  • [53] Todor Tsankov. The additive group of the rationals does not have an automatic presentation. Journal of Symbolic Logic, 76(4):1341–1351, 2011. doi:10.2178/jsl/1318338853.