FC-Datalog as a Framework for Efficient String Querying
Abstract
Core spanners are a class of document spanners that capture the core functionality of IBM's AQL. FC is a logic on strings built around word equations that, when extended with constraints for regular languages, can be seen as a logic for core spanners. The recently introduced FC-Datalog extends FC with recursion, which allows us to define recursive relations for core spanners. Additionally, as FC-Datalog captures P, it is also a tractable version of Datalog on strings. This presents an opportunity for optimization.
We propose a series of FC-Datalog fragments with desirable properties in terms of complexity of model checking, expressive power, and efficiency of checking membership in the fragment. This leads to a range of fragments that all capture L, which we further restrict to obtain linear combined complexity. This gives us a framework to tailor fragments for particular applications. To showcase this, we simulate deterministic regex in a tailored fragment of FC-Datalog.
Keywords and phrases:
Information extraction, word equations, datalog, document spanners, regex
2012 ACM Subject Classification:
Theory of computation → Logic and databases
Acknowledgements:
The authors would like to thank the anonymous reviewers of the current and previous versions for their detailed and helpful feedback.
Funding:
This work was supported by EPSRC grant EP/T033762/1.
Supplementary Material:
Data Accessibility: No data was generated or analyzed during this study.
Editors:
Sudeepa Roy and Ahmet Kara
1 Introduction
As a vast amount of valuable information is stored in unstructured textual data, the operation of extracting structured information from such data is crucial. This operation is the classical task known as Information Extraction (IE), and has applications from healthcare (see e.g. [39]) to social media analytics (see e.g. [5]). A popular approach to this task is the rule-based technique, which can be understood as querying a text as one queries a relational database. Document spanners are a framework for rule-based information extraction. We consider a recursive model connected to document spanners called FC-Datalog, which is based on the logic FC on strings and the query language Datalog. Thus, we first discuss the latter two, and the connection to document spanners, before moving to the former.
FC.
The theory of concatenation (short: C) is a logic on strings with the infinite universe Σ*. Introduced by Freydenberger and Peterfreund [16], FC is a finite model version of C which has a finite universe, namely a single word w and all of its factors (contiguous subwords). As a result of this restriction, FC has decidable model checking and evaluation (see [16]). FC is built on word equations, equations of the form α ≐ β, where the variables in α and β represent words over a finite alphabet Σ. As a result, in FC we can reason directly over factors rather than intervals of positions, as is the case for other logics on strings such as monadic second order logic (MSO) over a linear order. Furthermore, in FC we can compare factors of unbounded length, something which is not possible in MSO (see [16] for details). Word equations themselves are a natural way of expressing many typical properties of a string, such as containing a square or being imprimitive (see e.g. [24]), and have previously been used for data management in other areas such as graph databases (see [6]).
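To make the finite-universe view concrete, the following Python sketch (our own naive illustration, not an algorithm from the paper; the pattern encoding and all function names are ours) enumerates substitutions whose values are factors of a word w and checks a word equation such as x ≐ y·y.

```python
from itertools import product

def factors(w: str) -> set[str]:
    """All contiguous subwords of w, including the empty word."""
    return {""} | {w[i:j] for i in range(len(w)) for j in range(i + 1, len(w) + 1)}

def holds(equation, sigma) -> bool:
    """Check whether a substitution satisfies a word equation.
    An equation is a pair of patterns; a pattern is a list of items,
    where ('var', x) is a variable and ('term', a) is a terminal word."""
    def image(pattern):
        return "".join(sigma[x] if kind == "var" else x for kind, x in pattern)
    lhs, rhs = equation
    return image(lhs) == image(rhs)

def solutions(equation, variables, w):
    """Enumerate all substitutions over factors of w that solve the equation."""
    for values in product(sorted(factors(w)), repeat=len(variables)):
        sigma = dict(zip(variables, values))
        if holds(equation, sigma):
            yield sigma

# x =. y y (x is a square): every square factor of w shows up as sigma["x"].
square = ([("var", "x")], [("var", "y"), ("var", "y")])
print(sorted({s["x"] for s in solutions(square, ["x", "y"], "aabaab")}))
# ['', 'aa', 'aabaab']
```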
Document Spanners and FC.
Introduced by Fagin, Kimelfeld, Reiss and Vansummeren [12], document spanners (or spanners) are a rule-based framework for Information Extraction. Spanners were introduced to capture the core functionality of AQL, a query language used for IBM's SystemT. Informally, spanners are functions that take a text document as input and output a relation over intervals (called spans) from the document. Intuitively, primitive extractors (which are commonly regular expressions with capture variables called regex formulas) extract relations of spans, and these are then combined using relational algebra.
Many works on spanners, particularly in the area of enumeration (see e.g. [3, 13, 35]), have been concerned with the subclass of regular spanners, which are regex formulas extended with projection (π), union (∪), and natural join (⋈). However, [12] showed that this subclass cannot express more than recognizable relations. The full class of spanners introduced by Fagin et al. [12] are called the core spanners, as these achieve the original motivation of capturing the core functionality of AQL. Core spanners extend the regular spanners with string equality selection (denoted ζ=), an operation necessary to perform fundamental tasks such as reading multiple occurrences of a string. This added expressibility comes at the cost of reduced efficiency (see e.g. [14, 15]). Further extending the core spanners with set difference gives the class of generalized core spanners.
The logic FC has a tight connection to core spanners. In particular, the extension FC[REG], which extends FC with constraints that decide membership in regular languages, captures the expressive power of generalized core spanners, and the existential-positive fragment EP-FC[REG] captures the expressive power of core spanners. Furthermore, there are polynomial time conversions between FC[REG] and generalized core spanners, and between EP-FC[REG] and core spanners (see Section 5.2 of [16]). While spanners reason over intervals of positions, FC reasons directly over factors. When dealing with factors, unlike with intervals of positions, the default is not to distinguish between duplicates. However, simulating different intervals containing the same factor is easily done: we can simply store, in addition to the factor, the prefix preceding it. On the other hand, eliminating duplicates when working with intervals, such as in spanners, is not so easily achieved. Due to the connection between the two models, we can use FC to gain insights into spanners; previous examples of work on FC that has produced results for spanners are [18] for tractability and [38] for inexpressibility.
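A small illustration of the prefix trick (our own sketch, not from the paper): a span of w can be represented by the pair consisting of the prefix preceding it and the factor itself, which keeps different occurrences of the same factor distinguishable.

```python
def spans_as_prefix_factor(w: str):
    """Represent every span of w by (preceding prefix, factor)."""
    return {(w[:i], w[i:j]) for i in range(len(w) + 1) for j in range(i, len(w) + 1)}

# Two occurrences of the factor "ab" in "abab" stay distinguishable:
print(sorted(pf for pf in spans_as_prefix_factor("abab") if pf[1] == "ab"))
# [('', 'ab'), ('ab', 'ab')]
```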
Datalog.
Datalog, a query language for relational databases, was introduced to perform operations that were not possible in earlier database languages, such as graph transitive closure (see e.g. [1]). A Datalog program has a database of prepopulated extensional relations and a set of recursive rules that define new intensional relations. Semi-positive Datalog, which allows negation for atoms with extensional relation symbols, captures P on ordered structures (see e.g. [28]). Linear Datalog permits at most one atom with an intensional relation symbol in the body of every rule, and semi-positive linear Datalog captures NL on ordered structures (see e.g. [20]). A more general definition of linear Datalog permits, in the body of every rule, at most one atom with an intensional relation symbol that is mutually recursive with the head relation symbol (see e.g. [1]). We can evaluate body atoms that have other intensional relation symbols as subroutines, and so this extended definition does not affect the complexity. Linear Datalog also captures how recursion is done in SQL (see [30]).
The combined complexity of Datalog model checking is EXPTIME-complete, even if the input database is empty, the universe is made up of only two elements, and the program has only a single rule (see [10, 20]). For linear Datalog it is PSPACE-complete, again even with an empty input database, a two-element universe, and a single rule (see [20]).
An earlier approach to adapting Datalog for querying strings is Sequence Datalog (see [7]), but this has an undecidable model checking problem. Furthermore, in the spanner setting, Peterfreund, ten Cate, Fagin and Kimelfeld [33] introduced RGXlog, Datalog over regex formulas. RGXlog was motivated by the SystemT developers' interest in recursion (for example, to implement context-free grammars for natural language processing), and captures the complexity class P. Spannerlog, as introduced in [32], generalizes the spanner and relational model and has recently been implemented in [29]. Spannerlog with stratified negation and restricted to string extensional relations also captures P (see Section 6 of [33]).
FC-Datalog.
Together with FC, Freydenberger and Peterfreund [16] also introduced FC-Datalog, which extends existential-positive FC (EP-FC) with recursion, analogously to how Datalog extends existential-positive first-order logic. It is worth pointing out that EP-FC is able to express the inequality of two strings (see e.g. Example 5.3 in [14]). As in FC, we have the finite universe of a single word and all of its factors. FC-Datalog has word equations instead of extensional relations. That is, FC-Datalog atoms are word equations or relation atoms. In FC-Datalog we adopt the fixed point semantics.
Example 1.1.
From Theorem 4.11 of [16], FC-Datalog captures the complexity class P. When considering efficiency, we are primarily interested in model checking, which relates practically to deciding if a tuple is in a relation. We can see FC-Datalog as a language for expressing relations that can be used in spanner selections. Because spanners reason over intervals of positions, expressing relations can become cumbersome, as a relation holds such intervals, including all those that represent the same factor. In contrast, we can neatly express relations in FC-Datalog, as we reason directly over factors and so a relation holds the factors themselves.
Example 1.2.
In FC-Datalog, we would express a relation that contains all factors that are squares with a single rule such as Sq(x) ← x ≐ y·y. In core spanners, we would express a relation that contains the positions of all factors that are squares by combining a regex formula with string equality selection and projection. See [12] for the full definition of core spanners.
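For intuition, here is a direct computation of the same relation in Python (our own illustration of the factor-based view, not of either formalism):

```python
def square_factors(w: str) -> set[str]:
    """All factors of w of the form v·v (the relation Sq from Example 1.2)."""
    facs = {w[i:j] for i in range(len(w) + 1) for j in range(i, len(w) + 1)}
    return {f for f in facs if len(f) % 2 == 0 and f[: len(f) // 2] == f[len(f) // 2 :]}

print(sorted(square_factors("aabaab")))  # ['', 'aa', 'aabaab']
```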
Parallel to this, where model checking Sequence Datalog is undecidable, the same problem for FC-Datalog is in P (in terms of data complexity), and so we can also see FC-Datalog as a tractable recursive query language for strings, independent of the connection to spanners. We aim to identify techniques that make recursion less expensive. We thus look to define restrictions that lead to more efficient fragments of FC-Datalog which also retain other desirable properties.
We focus on the model checking problem primarily through two different lenses: data complexity and combined complexity. In many cases for query languages it is reasonable to use data complexity as the queries are often significantly smaller than the data. In text-based settings on the other hand, features such as regular expressions can make queries large. Consequently, combined complexity remains an important consideration.
Deterministic Regex.
As the regular languages are often not enough to express what is required in practice, almost all modern programming languages (such as Perl, Python, and Java) do not implement only classical regular expressions, as introduced by Kleene [26], but regex: regular expressions extended with back-references. These are operators that match a repetition of a previously matched string; whilst they do increase expressibility, they also lead to intractability of the membership problem (see [2]). Freydenberger and Schmid [17] combined regex with the notion of determinism to define DRX, the class of deterministic regex, to obtain a tractable class of regex with more expressive power than deterministic regular expressions.
Other Related Models.
As well as FC-Datalog, there also exist other related models that capture P. These include positive Range Concatenation Grammars (see e.g. [8]) and Hereditary Elementary Formal Systems (see e.g. [31]), which do not have finite model semantics. The key difference between these models and FC-Datalog lies in FC-Datalog's use of word equations, which are not present in the other formalisms and are crucial for our restrictions that lead to efficient fragments.
Contributions of this Paper.
The only previously known complexity result for FC-Datalog is that it captures P. We first show that the combined complexity of FC-Datalog is EXPTIME-complete, and then perform an evaluation of different restrictions on FC-Datalog to identify fragments: with more efficient model checking, for both data and combined complexity; that do not overly sacrifice expressive power; and where membership in the fragment can be checked efficiently. As the semi-positive linear fragment of classical Datalog captures NL (on ordered structures), our first restriction is to adapt linearity to FC-Datalog. We show that linear FC-Datalog also captures NL and has PSPACE-complete combined complexity. Our second restriction is to remove nondeterminism. Here, we define deterministic linear FC-Datalog, which captures L. But checking whether a linear program is deterministic is as hard as satisfiability for word equations, a problem which is known to be NP-hard (see [4, 27]) and in nondeterministic linear space (see [23]). Therefore, we employ another restriction that we call one letter lookahead (OLL) on the permitted word equations. Deterministic OLL (DOLL) FC-Datalog captures L and checking whether an OLL program is deterministic can be done in polynomial time, but its combined complexity is still PSPACE-complete. We thus make a final restriction that we call strictly decreasing (SD) and define SD DOLL FC-Datalog, which has linear combined complexity.
We therefore establish the endpoints of a range of fragments that all capture L; at one end is deterministic linear FC-Datalog and at the other is DOLL FC-Datalog. We establish a trade-off in this range between how rich the fragment's syntax is and how easy it is to check membership in the fragment, although fully mapping this range is left for future work. Furthermore, we show that we can obtain an FC-Datalog fragment with linear combined complexity, namely SD DOLL FC-Datalog. Consequently, we have paved the way to design tailored fragments for particular applications.
We then explain how we can view FC-Datalog programs from our range as generalized nondeterministic and deterministic multi-headed two-way finite automata, which are equivalent to nondeterministic and deterministic logarithmic-space Turing machines, respectively (see [25]). FC-Datalog fragments from our range allow for more flexibility than these automata models; for example, the fragment we tailor in Section 4 to simulate DRX can be viewed as a generalization that permits performing nonregular string computations in the transitions.
Where in [17], to check whether a DRX matches a word, one has to construct a technically involved automaton model, we show that we can model this simply in SD DOLL FC-Datalog. We also show how a tailored fragment allows us to conveniently and naturally write programs that are more concise. This tailored fragment permits additional deterministic components, but despite this maintains all of the desirable properties of DOLL FC-Datalog: it captures L, determinism can be checked in polynomial time, and its strictly decreasing variant has linear combined complexity. Finally, we show that, as we can simulate DRX, another example of deterministic components that can be added to these tailored fragments are constraints that match DRX.
2 Preliminaries
Let ℕ denote the set of natural numbers and let ℕ⁺ be the set of positive integers. We denote the empty set by ∅ and the cardinality of a set S by |S|. Let A \ B be the set difference of A and B, and let 𝒫(A) be the power set of A. Let A be some alphabet. For any two u, v ∈ A*, we use u·v, or simply uv, for the concatenation of u and v. Let |u| denote the length of u. Let Σ be a finite alphabet whose elements we call terminal symbols (or just terminals). We call any w ∈ Σ* a word (or string). We use ε to denote the empty word and let Σ⁺ := Σ* \ {ε}. A factor of a word w ∈ Σ* is a word v such that there exist u, u' ∈ Σ* with w = u·v·u'. In the literature, a factor is sometimes called a contiguous subword.
For a tuple t⃗ = (t₁, …, tₙ), let its size |t⃗| := n be the number of components in t⃗, and write tᵢ ∈ t⃗ to denote that tᵢ is a component of t⃗. We refer to tuples t⃗ with |t⃗| = 1 as singletons and omit the brackets in this case. A relation is a set of tuples of the same size, and is represented by a relation symbol.
Let Ξ be a countably infinite set of variables with Ξ ∩ Σ = ∅. A pattern is a word α ∈ (Ξ ∪ Σ)*. We define var(α) as the set of variables in α. For two alphabets A and B, a homomorphism is a function h: A* → B* where h(u·v) = h(u)·h(v) holds for all u, v ∈ A*. A pattern substitution is a homomorphism σ: (Ξ ∪ Σ)* → Σ* where σ(a) = a for all a ∈ Σ. We usually refer to a pattern substitution as simply a substitution. We denote the image of a pattern α under a substitution σ by σ(α). If σ(x) = ε for some variable x, we say σ is erasing. For a tuple of patterns α⃗ = (α₁, …, αₙ) and a substitution σ, let σ(α⃗) := (σ(α₁), …, σ(αₙ)).
For two patterns α, β ∈ (Ξ ∪ Σ)*, an equation of the form α ≐ β is called a word equation. Under a substitution σ, the word equation α ≐ β holds if σ(α) = σ(β), and we then say σ is a solution of α ≐ β. If there exists a solution for α ≐ β, then we say α ≐ β is satisfiable. We say a conjunction of two word equations is satisfiable if there exists a substitution that is a solution to both of them. We say two word equations contradict each other if their conjunction is not satisfiable. A pattern equation is a word equation of the form x ≐ α, where x ∈ Ξ and α ∈ (Ξ ∪ Σ)*. For every pattern equation x ≐ α, we define var(x ≐ α) := {x} ∪ var(α). The expressive power of conjunctions of pattern equations is the same as that for conjunctions of word equations, as we can simulate a word equation α ≐ β, for α, β ∈ (Ξ ∪ Σ)*, using two pattern equations x ≐ α and x ≐ β for a new variable x.
FC-Datalog.
An FC-Datalog atom is either a pattern equation, or a so-called relation atom of the form R(x₁, …, xₙ), where R is a relation symbol with arity ar(R) = n and x₁, …, xₙ ∈ Ξ. Without loss of generality, we can assume that for every pattern equation atom x ≐ α, the variable x does not occur in α, because such equations reduce to trivial scenarios, as in Lemma 3.1 of [37]. For a relation atom R(x₁, …, xₙ), we define var(R(x₁, …, xₙ)) := {x₁, …, xₙ}. For now, we assume each relation symbol R has a corresponding relation I(R). How relations are updated is specified in the following semantics.
Let w ∈ Σ* be a word and let σ be a substitution. For a pattern equation atom x ≐ α, we say σ ⊨_w (x ≐ α) if σ(x) = σ(α) and both σ(x) and σ(α) are factors of w. For a relation atom R(x₁, …, xₙ), we say σ ⊨_w R(x₁, …, xₙ) if (σ(x₁), …, σ(xₙ)) ∈ I(R) and σ(x₁), …, σ(xₙ) are factors of w. We represent the word that defines the universe with the distinguished universe variable u, and use this as an input to an FC-Datalog program.
Let τ be a relational vocabulary (see e.g. [11, 28] for a definition of structures and vocabularies). A conjunctive query is an FO formula of the form ∃y₁ ⋯ ∃yₘ (A₁ ∧ ⋯ ∧ Aₖ), where k ≥ 1, each Aᵢ is an atom over a relation symbol in τ and a tuple of τ-terms, and m ≥ 0. We usually write this as φ(x⃗), where x⃗ is the tuple of free variables. We call φ(x⃗) the head of the query and the quantified conjunction its body. If there are no free variables, then the query is Boolean. We call the set of conjunctive queries CQ. We also use the output relation symbol Ans in FC-Datalog.
Definition 2.1.
An FC-Datalog program is a 3-tuple P := (u, 𝒮, ℛ), where:
- u is the universe variable,
- 𝒮 is a finite set of relation symbols, where each R ∈ 𝒮 has an arity ar(R),
- ℛ is a finite set of rules of the form R(x₁, …, xₖ) ← A₁, …, Aₘ for some R ∈ 𝒮 with ar(R) = k and some m ≥ 1, where every Aᵢ is an FC-Datalog atom, and every xⱼ with j ∈ {1, …, k} occurs in some Aᵢ.
Each element of ℛ can be intuitively seen as a conjunctive query over FC-Datalog atoms and, as for CQ, when ar(Ans) = 0, the program is Boolean. For brevity, just ℛ is used to represent the whole tuple if u and 𝒮 are clear from context. For a relation symbol R, let ℛ_R be the subset of rules with head relation symbol R. For an FC-Datalog rule r, we define eq(r) as the conjunction of all the pattern equations in the body of r.
Example 2.2.
For the FC-Datalog program P = (u, 𝒮, ℛ), where ℛ is the set of rules defined in Example 1.1, ℛ splits into two subsets ℛ_R, one for each of the two head relation symbols R. For any rule r of this program, eq(r) is the conjunction of the pattern equations in its body.
Let r := (R(x₁, …, xₖ) ← A₁, …, Aₘ) be an FC-Datalog rule and let V := var(A₁) ∪ ⋯ ∪ var(Aₘ). For a word w ∈ Σ*, a w-substitution is a substitution σ where σ(u) = w and σ(x) is a factor of w for each x ∈ V. Using only w-substitutions ensures that the universe is restricted to w and its factors. For the rule r and a w-substitution σ, we say σ ⊨_w r if σ ⊨_w Aᵢ for all i ∈ {1, …, m}. As we only consider the finite universe setting here, we only consider w-substitutions, and so refer to these as just substitutions for brevity.
We treat an FC-Datalog program as implicitly defining a vocabulary and define corresponding structures. An FC-Datalog structure consists of a fixed universe, namely a word w and all of its factors, and an interpretation function I that maps every relation symbol R to an interpretation I(R). For convenience, we also refer to I(R) as R.
A program P and a word w define a structure incrementally. First, all interpretations I(R) are initialized to ∅. Then, for each rule r := (R(x₁, …, xₖ) ← A₁, …, Aₘ) and every w-substitution σ with σ ⊨_w r, we add (σ(x₁), …, σ(xₖ)) to I(R). We repeat this until all I(R) have stabilized. Then, the result of P on w is the content of the Ans relation. This "filling up" of relations mirrors the fixed point semantics of classical Datalog.
The function ⟦P⟧ so defined, which maps w to a set of ar(Ans)-ary tuples over factors of w, is well-defined (thus the fixed point is unique), and the "filling up" process terminates for every given w. If P is Boolean, ⟦P⟧(w) is either {()} or ∅. We interpret these as true and false (respectively) and use this to define a language L(P).
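The following Python sketch is our own illustration of the "filling up" semantics on a simplified rule encoding (the encoding of atoms and the terminal marker are our assumptions, not the paper's syntax); it evaluates a program bottom-up until the relations stabilize.

```python
from itertools import product

def factors(w):
    """All factors of w, including the empty word."""
    return {""} | {w[i:j] for i in range(len(w)) for j in range(i + 1, len(w) + 1)}

def evaluate(rules, w, universe_var="u"):
    """Naive bottom-up ("filling up") evaluation.
    A rule is (head_symbol, head_vars, body); a body atom is either
      ("eq", x, pattern)  for a pattern equation x =. pattern, where pattern is a
                          list of variables and terminals (terminals prefixed by "'"), or
      ("rel", R, args)    for a relation atom R(args)."""
    facs = sorted(factors(w))
    interp = {head: set() for head, _, _ in rules}

    def atom_vars(atom):
        if atom[0] == "eq":
            return [atom[1]] + [p for p in atom[2] if not p.startswith("'")]
        return list(atom[2])

    def image(pattern, sigma):
        return "".join(p[1:] if p.startswith("'") else sigma[p] for p in pattern)

    def satisfied(body, sigma):
        for atom in body:
            if atom[0] == "eq" and sigma[atom[1]] != image(atom[2], sigma):
                return False
            if atom[0] == "rel" and tuple(sigma[v] for v in atom[2]) not in interp[atom[1]]:
                return False
        return True

    changed = True
    while changed:
        changed = False
        for head, head_vars, body in rules:
            variables = sorted({v for atom in body for v in atom_vars(atom)})
            for values in product(facs, repeat=len(variables)):
                sigma = dict(zip(variables, values))
                sigma[universe_var] = w          # the universe variable always denotes w
                if satisfied(body, sigma):
                    t = tuple(sigma[v] for v in head_vars)
                    if t not in interp[head]:
                        interp[head].add(t)
                        changed = True
    return interp

# A toy Boolean program for the language of squares:  Ans() <- u =. x x
rules = [("Ans", (), [("eq", "u", ["x", "x"])])]
print(bool(evaluate(rules, "abab")["Ans"]))  # True
print(bool(evaluate(rules, "aba")["Ans"]))   # False
```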
Example 2.3.
The FC-Datalog program P, where ℛ consists of the following rules:
defines the language of squares. Note that without the three rules that have the recursive relation symbol in the head, the program would define a different language.
We look at the model checking problem for FC-Datalog: given a Boolean FC-Datalog program P and a word w, is w ∈ L(P)? We consider three perspectives: data complexity, where the program is fixed and only the word is considered input; expression complexity, where the word is fixed and only the program is considered input; and combined complexity, where both the word and the program are considered input (for details, see [28]).
We call a subset of FC-Datalog programs a fragment. We say that a fragment F captures a complexity class C if the class of languages expressible by Boolean programs in F is exactly C. Note that, following directly from the definitions, if F captures C, then the data complexity of F is in C. The only complexity result previously known for FC-Datalog is that it captures P (see Theorem 4.11 of [16]); however, there are results for other versions of Datalog, such as those discussed in Section 1.
3 Efficient FC-Datalog
As P is often not considered efficient for data complexity, this presents an opportunity for optimization. We show that this is also the case for combined complexity.
Theorem 3.1.
Combined complexity of FC-Datalog is EXPTIME-complete.
It is straightforward to see that, as our universe is finite, the number of tuples we can add to the relations is exponential. We therefore have a naive evaluation algorithm consisting of a loop with an exponential bound on the number of iterations and an exponential number of steps in each iteration. We can also straightforwardly reduce from classical Datalog evaluation to show EXPTIME-hardness. Our main goal in this section is to identify fragments with lower data and combined complexity, and where membership in the fragment can be easily checked.
3.1 Linearity
There exists a fragment of classical Datalog that captures the complexity class NL on ordered structures (see e.g. [20]), namely the fragment of all semi-positive linear programs. As discussed in Section 1, there is a more general definition of linearity (see e.g. [1]) that does not affect complexity. Here we translate the more general linearity restriction from the relational setting to the text setting.
We define a relation ⇒ over relation symbols: R ⇒ R' if there exists a rule where R is the head relation symbol and R' appears in the body. We denote the transitive closure of ⇒ by ⇒⁺, and two relation symbols R and R' are mutually recursive if R ⇒⁺ R' and R' ⇒⁺ R.
Definition 3.2.
Let P be an FC-Datalog program with rule set ℛ. A rule r ∈ ℛ is linear with respect to P if at most one atom in the body of r has a relation symbol with which the head relation symbol of r is mutually recursive. If every r ∈ ℛ is linear, then P is linear.
Checking whether a given FC-Datalog program is linear can be done in polynomial time with respect to the program's number of rules, as it amounts to determining which pairs of relation symbols are mutually recursive, which is a syntactic criterion.
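A minimal sketch of this syntactic check (our own code; a rule is given as its head symbol plus the list of relation symbols occurring in its body, and the transitive closure is computed naively):

```python
def mutually_recursive(rules):
    """rules: list of (head_symbol, body_symbols).
    Returns all mutually recursive pairs via the transitive closure of the
    'head depends on body symbol' relation."""
    reach = {(h, b) for h, body in rules for b in body}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(reach):
            for (c, d) in list(reach):
                if b == c and (a, d) not in reach:
                    reach.add((a, d))
                    changed = True
    return {(a, b) for (a, b) in reach if (b, a) in reach}

def is_linear(rules):
    mr = mutually_recursive(rules)
    return all(sum((head, b) in mr for b in body) <= 1 for head, body in rules)

# Ans(x) <- x =. y z, Ans(y), Ans(z)  plus base rules: not linear.
even = [("Ans", ["Ans", "Ans"]), ("Ans", [])]
print(is_linear(even))  # False

# Ans <- E ; E <- E ; E <- (base): linear.
print(is_linear([("Ans", ["E"]), ("E", ["E"]), ("E", [])]))  # True
```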
Example 3.3.
The FC-Datalog programs given in Example 1.1 and Example 2.3 are both linear FC-Datalog programs. An example FC-Datalog program that is not linear is the following program that retrieves all even length factors of the input word:
together with a base rule for each choice of terminals. This is not linear, as Ans is mutually recursive with itself, and the top rule has Ans in the head and two occurrences of Ans in the body.
While unrestricted FC-Datalog captures P, the restriction to linear FC-Datalog has substantially improved data complexity (Theorem 3.4) as well as combined complexity (Theorem 3.5).
Theorem 3.4.
Linear FC-Datalog captures NL.
We capture NL as we can simulate multi-headed two-way nondeterministic finite automata, which are equivalent to nondeterministic Turing machines with logarithmic space (see [25]). We will further discuss the connection to automata in Section 3.5.
Theorem 3.5.
Combined and expression complexity of linear FC-Datalog are PSPACE-complete.
On the other hand, we would like to lower the data complexity further than NL and the combined complexity further than PSPACE. Furthermore, PSPACE-completeness occurs even on a single-character input. As such, we look for more efficient fragments.
3.2 Determinism
As linear FC-Datalog captures NL, it is natural to look at causes of nondeterminism and see if there is a corresponding fragment that captures L. For model checking, we evaluate FC-Datalog programs top-down.
Example 3.6.
We show a top-down evaluation of the program in Example 2.3 on an input word. We first apply the only rule that is applicable: its pattern equations determine the values of its bottom variables from the input word, and we pass these into the recursive relation. At each subsequent step, only one rule applies, and its pattern equations determine the values on which we recurse. Eventually we can apply a non-recursive rule, and as its pattern equations hold, we accept.
Based on this top-down evaluation model, we define two categories of variables.
Definition 3.7.
Let r := (R(x₁, …, xₙ) ← A₁, …, Aₘ, R'(y₁, …, yₖ)) be a linear FC-Datalog rule where R' is mutually recursive with R, and every Aᵢ, for i ∈ {1, …, m}, is either a pattern equation or a relation atom whose relation symbol is not mutually recursive with R. We say r's top variables are top(r) := (x₁, …, xₙ) and r's bottom variables are bot(r) := (y₁, …, yₖ).
In top-down evaluation, a rule's top variables contain values that were passed down from a preceding rule's evaluation, and its bottom variables contain the values that are passed down. A variable can be both a top and bottom variable.
Example 3.8.
Let r be the recursive rule of the linear FC-Datalog program from Example 2.3. Then top(r) consists of the variables in its head, and bot(r) consists of the variables in its recursive body atom.
When evaluating a linear program, nondeterminism can occur in two ways: in choosing the values of the variables when processing a rule, which we call local nondeterminism, and in choosing which rule to process, which we call global nondeterminism. In order to remove all nondeterminism, we must ensure every program has both local and global determinism.
Let w be the input word and let r be a linear FC-Datalog rule. We define the relation ⟦r⟧_w to be the set of all pairs (σ(top(r)), σ(bot(r))), for all substitutions σ such that σ ⊨_w r. For some (t⃗, b⃗) ∈ ⟦r⟧_w, we call t⃗ a top tuple and b⃗ a bottom tuple. Recall that, as we are working under finite model semantics, each value is a factor of the word w. We define two criteria for determinism.
Definition 3.9.
A linear FC-Datalog program P with rule set ℛ is locally deterministic if, for each r ∈ ℛ, the relation ⟦r⟧_w is a partial function. P is globally deterministic if, for every relation symbol R and all pairs of distinct rules r₁, r₂ ∈ ℛ_R, there is no top tuple t⃗ such that (t⃗, b⃗₁) ∈ ⟦r₁⟧_w and (t⃗, b⃗₂) ∈ ⟦r₂⟧_w for some bottom tuples b⃗₁ and b⃗₂.
If ⟦r⟧_w is a partial function for some rule r, then top-down evaluation of r is locally deterministic, as for every top tuple there is at most one bottom tuple. If a top tuple is a valid input for only one rule in ℛ_R, for r's head relation symbol R, then we have global determinism, as r is the only rule we can process.
Definition 3.10.
Deterministic linear FC-Datalog is the set of linear FC-Datalog programs that are locally and globally deterministic.
Our condition is sufficiently strong, as we precisely capture L.
Theorem 3.11.
Deterministic linear FC-Datalog captures L.
We capture L as we can simulate multi-headed two-way deterministic finite automata, which are equivalent to deterministic Turing machines with logarithmic space (see [25]). Unfortunately, where linearity for FC-Datalog can be checked in polynomial time, checking determinism for linear FC-Datalog is considerably more expensive, as the criteria we have to check are semantic rather than syntactic. A problem X is word equations-hard if there exists a polynomial time reduction from word equation satisfiability to X.
Proposition 3.12.
Checking local or global determinism of linear FC-Datalog is word equations-hard.
Word equation satisfiability is NP-hard [4, 27] and in nondeterministic linear space [23]. Closing this gap is a longstanding open problem. Thus, although we have reduced the complexity, we have lost efficient checking of membership in the fragment.
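To see concretely what such a determinism check would have to solve, here is a naive satisfiability test for a word equation by bounded brute force (our own illustration; real decision procedures are far more involved, and the length bound below is only a heuristic cut-off, not a correctness guarantee):

```python
from itertools import product

def satisfiable_bounded(lhs, rhs, alphabet="ab", max_len=3):
    """Brute-force search for a solution of the word equation lhs =. rhs,
    trying substitutions that map each variable to a word of length <= max_len.
    Patterns are lists; lowercase items are terminals, other items are variables."""
    variables = sorted({p for p in lhs + rhs if not p.islower()})
    words = [""] + ["".join(t) for n in range(1, max_len + 1)
                    for t in product(alphabet, repeat=n)]
    def image(pattern, sigma):
        return "".join(sigma.get(p, p) for p in pattern)
    for values in product(words, repeat=len(variables)):
        sigma = dict(zip(variables, values))
        if image(lhs, sigma) == image(rhs, sigma):
            return sigma
    return None

# X a Y =. Y a X has, for example, the solution X = Y = "".
print(satisfiable_bounded(["X", "a", "Y"], ["Y", "a", "X"]))
```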
We now show that, in contrast to regular spanners, under standard complexity theoretic assumptions we cannot expect to have constant delay enumeration algorithms in our setting. A constant delay enumeration algorithm has a preprocessing phase, which often runs in linear time, and an enumeration phase that outputs solutions one by one, with constant time between any consecutive outputted solutions (see e.g. [36]).
Proposition 3.13.
If there exists a constant delay enumeration algorithm with polynomial preprocessing for deterministic linear FC-Datalog, then a widely believed complexity-theoretic assumption fails.
As such, we do not look further into constant delay enumeration, and instead focus on identifying a fragment that has both reduced complexity and efficient membership checking, without compromising on expressive power.
3.3 One Letter Lookahead
In this subsection, we introduce a second fragment that captures L, but where we can check membership in the fragment in polynomial time. This fragment has a severely restricted syntax but can still simulate multi-headed two-way deterministic finite automata, demonstrating how little of FC-Datalog is required to retain this level of expressibility.
Definition 3.14.
Let r be a linear FC-Datalog rule. Let x ∈ top(r), y ∈ bot(r), and a ∈ Σ. The rule r is a one letter lookahead (OLL) FC-Datalog rule if each of its pattern equations has one of the following forms:
- y ≐ x, y ≐ a, or y ≐ ε.
- x ≐ a·y or x ≐ y·a  (deleting a letter from a top variable).
- y ≐ a·x or y ≐ x·a  (adding a letter to a top variable).
- u ≐ x·a·z for a variable z that does not occur elsewhere in r (matching the next letter after a top variable in the input word).
- u ≐ z·a·x for a variable z that does not occur elsewhere in r (matching the first letter before a top variable in the input word).
Equations of the first form above do not position a letter relative to a top variable. Otherwise, we call an equation where the letter a appears to the right of the variable x a right pattern equation, and an equation where a appears to the left of x a left pattern equation.
Definition 3.15.
Let P be a linear FC-Datalog program with rule set ℛ. We say P is a one letter lookahead (OLL) FC-Datalog program if all r ∈ ℛ are OLL FC-Datalog rules, and for every relation symbol R and every rule r ∈ ℛ_R, no variable x ∈ top(r) is used in both left and right pattern equations. An OLL program is deterministic (a DOLL program) if it is both locally and globally deterministic.
As such, we also consider programs where, in every rule, an equation of the form y ≐ a, for a ∈ Σ, is only used if the same rule also contains an equation of the form u ≐ x·a·z or u ≐ z·a·x that verifies the letter a against the input word. We call such a program guarded.
Example 3.16.
The linear FC-Datalog program given in Example 2.3 is not an OLL FC-Datalog program, as its recursive rule contains a pattern equation that does not fit any of the permitted forms of pattern equations for an OLL FC-Datalog program.
The simplest OLL FC-Datalog program the authors could find that models the language of squares from Example 2.3 is the one that models the multi-headed two-way DFA: this program first identifies the middle of the string and then processes the two halves letter by letter. We now see that, despite restricting the programs substantially, our new fragment DOLL FC-Datalog retains the expressive power of deterministic linear FC-Datalog.
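For intuition, the following Python sketch (ours, not the paper's program) mirrors that strategy with two "heads": one head moves twice as fast as the other to find the middle, and the two halves are then compared letter by letter.

```python
def is_square(w: str) -> bool:
    """Check w = v·v in the style of a multi-headed two-way DFA:
    phase 1 finds the middle, phase 2 compares the two halves letter by letter."""
    slow, fast = 0, 0
    while fast < len(w):      # phase 1: the fast head moves two letters per step,
        fast += 2             # the slow head one letter per step
        slow += 1
    if fast != len(w):        # odd length: the fast head overshot the end
        return False
    left, right = 0, slow     # phase 2: one head per half, compared in lockstep
    while right < len(w):
        if w[left] != w[right]:
            return False
        left += 1
        right += 1
    return True

print(is_square("abab"), is_square("aba"), is_square(""))  # True False True
```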
Theorem 3.17.
DOLL FC-Datalog captures L.
Checking whether a program is an OLL program is straightforward. Checking whether an OLL program is deterministic is easier than in the general case, as the pattern equations are restricted to match only one letter at a time. When checking both local and global determinism, we only need to consider rules that do not contain contradicting pattern equations; if a rule contains two pattern equations that contradict each other, then, as there are no solutions to their conjunction, this rule can never be applied. To check whether a program is locally deterministic, we check that, for every rule r without contradicting pattern equations, every bottom variable appears in some pattern equation that determines its value from the top variables or from the input word. If so, then ⟦r⟧_w is a partial function. To check whether a program is globally deterministic, we use profiles for its rules.
Definition 3.18.
Let r be an OLL FC-Datalog rule without pattern equations that contradict each other. We define a profile for r as a function profile_r that maps each top variable x ∈ top(r) to the letter that the pattern equations of r require at the relevant end of x (or next to x in the input word), and to a dedicated unconstrained value otherwise.
For two such values p and p', we say p and p' are in conflict if neither is the unconstrained value and p ≠ p'. Let r₁ and r₂ be linear FC-Datalog rules with the same head relation symbol, with respective profiles profile_{r₁} and profile_{r₂}. Note that we can assume top(r₁) = top(r₂) by renaming variables. We say r₁ and r₂ are in conflict if there exists some x ∈ top(r₁) where profile_{r₁}(x) is in conflict with profile_{r₂}(x).
The function profile_r represents how each top variable x is processed in r. In an OLL program, all we need to consider is the leftmost or rightmost letter of each σ(x), depending on whether x is used in left or right pattern equations for rules with this head relation symbol.
Lemma 3.19.
A guarded OLL FC-Datalog program is globally deterministic if and only if, for every relation symbol R and every pair of distinct rules r₁, r₂ ∈ ℛ_R, we have that r₁ is in conflict with r₂.
This allows us to efficiently check whether a given program is deterministic.
Proposition 3.20.
Local and global determinism for OLL FC-Datalog can be decided in polynomial time.
We thus have a fragment where we have reduced the data complexity to L, without losing efficient checking of membership in the fragment. However, the combined complexity for DOLL FC-Datalog is the same as for linear FC-Datalog.
Theorem 3.21.
Combined complexity of DOLL FC-Datalog is PSPACE-complete.
3.4 Strictly Decreasing
Our next goal is more efficient combined complexity. We introduce a further restriction called strictly decreasing, and show that this leads to linear combined complexity.
Definition 3.22.
Let P be a DOLL FC-Datalog program with rule set ℛ, and let ℛ' ⊆ ℛ be the set of rules containing both an atom with a relation symbol that is mutually recursive with the head relation symbol and at least one pattern equation. Let w be the input word. We say P is strictly decreasing (SD) if for every rule r ∈ ℛ' that has head relation symbol R:
1. there exist x ∈ top(r) and y ∈ bot(r) such that y occurs in a pattern equation together with x and, for all substitutions σ such that σ ⊨_w r, we have |σ(y)| < |σ(x)|, and
2. if there exists a relation symbol R' in the body of r that is mutually recursive with R, then for every rule r' with head relation symbol R', condition 1 holds for some x' ∈ top(r') and some y' ∈ bot(r'), and the decrease holds uniformly for all substitutions σ such that σ ⊨_w r and all substitutions σ' such that σ' ⊨_w r'.
We now see that this restriction significantly improves the combined complexity. In fact, if the maximum relation symbol arity is fixed (or assumed to be much smaller than |w|), then the bound is linear. Furthermore, the preprocessing does not depend on the word w.
Theorem 3.23.
Given a word w and an SD DOLL FC-Datalog program P with k relation symbols and maximum relation symbol arity a, we can decide whether w ∈ L(P) in time linear in |w| for fixed a, after preprocessing that depends only on P.
In this section, we have defined the endpoints of an infinite range of fragments that all capture L. At one end, DOLL FC-Datalog has just enough syntax to simulate multi-headed two-way deterministic finite automata, but allows easy checking of determinism. At the other end, deterministic linear FC-Datalog has a richer syntax, but checking determinism is more difficult. From this range, there is therefore the opportunity to extend DOLL FC-Datalog to make writing programs more natural and convenient. We will demonstrate such a situation in Section 4 when designing an appropriate fragment to model deterministic regex. Furthermore, although the fragments in this range do not reduce combined complexity any more than is the case for linear FC-Datalog, when we add the extra dimension of strictly decreasing, we can reduce the combined complexity to linear time.
3.5 FC-Datalog as Generalized Automata
As DOLL FC-Datalog, by design, permits just enough to simulate multi-headed two-way deterministic finite automata, we can naturally see a DOLL FC-Datalog program as a generalization of such an automaton: every relation acts as a state, every rule acts as a transition, and every string variable acts as a head which can be moved, added and deleted (as the number of heads can change per state). In each transition we read one letter and move, add, or delete the heads accordingly.
The fragments of FC-Datalog discussed in this paper can be seen as further generalizations. As the syntax grows, the connection to automata becomes less natural and checking determinism becomes harder. For example, Section 4 introduces a fragment that allows word equations such as x ≐ y·y. Instead of just moving a head by one letter, this is an operation that either splits a memory in half or doubles it. We can then see such a program as a generalized multi-headed two-way DFA where heads can read words rather than letters and that can perform nonregular string computations in the transitions.
We can also see an OLL FC-Datalog program as a generalized multi-headed two-way NFA, and a linear FC-Datalog program as a further generalization. There, reading letters at head positions is replaced with a declarative programming language that checks whether a rule applies and computes the operations. The increasing gap to traditional automata is reflected in the fact that determinism is now expensive to verify. Finally, for unrestricted FC-Datalog, we can understand every relation symbol in the body of a rule as a call of a subroutine, which splits the automaton into parallel copies, each of which must terminate. At this general level, however, the gap to traditional automata is so large that the connection is no longer natural.
4 Applying the Framework: Simulating DRX
In this section, we tailor a fragment from our range to a specific application. We apply our model to simulate deterministic regex, introduced by Freydenberger and Schmid [17].
We define regular expressions as usual (see e.g. [1]) and extend them with back-references to define regex. For a variable x ∈ Ξ and a regular expression α, we thus add the expressions x{α} and x to our syntax. The expression x{α} matches and saves the string that is matched by α in the memory x. All further occurrences of x are recalls of memory x, which we match using the content of the variable x that was saved earlier. For readability, we use · for concatenation.
Example 4.1.
Let α := x{a*}·b·x. Then α matches all words aⁿ·b·aⁿ where n ≥ 0.
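Back-references are available in most practical regex engines; the following Python snippet (shown purely as an illustration of the operator, not of the formalism in [17]) writes a capture-and-recall regex analogous to x{a*}·b·x using a capture group and the back-reference \1.

```python
import re

# Capture a block of a's, require a b, then require the same block again.
pattern = re.compile(r"(a*)b\1")

for w in ["aabaa", "aabaaa", "b", "aba"]:
    print(w, bool(pattern.fullmatch(w)))
# aabaa True, aabaaa False, b True, aba True
```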
Every regular expression can be converted into a finite automaton using the classical construction of Glushkov [19]. If the result of this construction is deterministic, then the regular expression is deterministic. Deterministic regular expressions define a strict subclass of the regular languages and have a more efficient membership problem than general regular expressions. For a deterministic regular expression α and a word w, we can decide membership in time linear in |w| after a preprocessing of α (see [9, 34]), and refined bounds are known (see [21]).
Freydenberger and Schmid [17] combined regex and deterministic regular expressions to define deterministic regex, which can define nonregular languages and has an efficient membership problem; membership can be decided in time linear in |w| for a word w, with a factor depending on the number of distinct variables and the total number of occurrences of terminal symbols or variable references in the regex (see Theorem 5 of [17]). Similarly to the Glushkov [19] construction, every regex α has a corresponding memory finite automaton with trap state (where the trap state handles memory recall failures), and α is deterministic if this automaton is deterministic (see [17] for details). We call the class of all deterministic regex DRX.
Example 4.2.
Let α be the regex defined in Example 4.1, and let β := x{a*}·x. Then L(α) = {aⁿ·b·aⁿ : n ≥ 0} and L(β) = {aⁿ·aⁿ : n ≥ 0}. Then α ∈ DRX and β ∉ DRX, as the automaton for α is deterministic and the automaton for β is not deterministic. Intuitively, this is because α has only one choice of when to stop matching the first a*, whereas β does not.
As in [17], we can assume w.l.o.g. that a regex does not recall empty memories, does not start to save into a memory that is already being saved to, and does not reset a memory to its initial value. We first show that for any α ∈ DRX, we can express L(α) in SD DOLL FC-Datalog, but with a large number of rules. We can parameterize such a program using the number of memories, terminal symbols and memory recalls in α. Here we use ε-semantics (in ε-semantics we treat everything not initialized as the empty word; see e.g. Section 8.2.1 in [17]).
Theorem 4.3.
Let α ∈ DRX have k memories, and let the total number of terminal symbols and memory recalls in α be n. We can express L(α) with an SD DOLL FC-Datalog program whose numbers of relation symbols and rules are bounded in terms of k, n, and the alphabet Σ.
The program that results from our construction requires a rule for each combination of a terminal symbol and a relevant reading position. This is because the syntax permits matching only one letter at a time, and so the program requires many rules to read memories letter by letter. We can achieve this with a much more concise program if we relax the syntax slightly, to a fragment we call uniquely defined (UD) FC-Datalog. This sits between DOLL FC-Datalog and deterministic linear FC-Datalog, and it retains the polynomial time checking of determinism. To define it, we introduce the concept of a symbol being uniquely defined for top-down evaluation.
Definition 4.4.
Let r be an FC-Datalog rule. We inductively define uniquely defined symbols: as base cases, the universe variable u, every x ∈ top(r), and every a ∈ Σ are uniquely defined. Then, if some pattern equation of r has exactly one variable that is not uniquely defined, that variable becomes uniquely defined.
Example 3.6 illustrates how top variables are uniquely defined for top-down evaluation.
Example 4.5.
Let r be a linear FC-Datalog rule with top(r) = (x) and pattern equations y ≐ x·x and z ≐ y·a, for a ∈ Σ. From the base cases, x and a are uniquely defined. In the first iteration, as all other symbols of y ≐ x·x are uniquely defined, y becomes uniquely defined. In the second iteration, as all other symbols of z ≐ y·a are uniquely defined, z becomes uniquely defined.
In deterministic linear and DOLL FC-Datalog, by Definition 3.9, we ensure local determinism by requiring the relation ⟦r⟧_w to be a partial function for every rule r, implicitly requiring every variable to be uniquely defined. We ensure local determinism in UD FC-Datalog by reasoning directly over uniquely defined variables.
Definition 4.6.
Let P be a linear FC-Datalog program with rule set ℛ. We say P is a uniquely defined (UD) FC-Datalog program if it is globally deterministic and, for every rule r ∈ ℛ, every variable that occurs in r is uniquely defined.
Example 4.7.
We define strictly decreasing (SD) for UD FC-Datalog the same way as for DOLL FC-Datalog (Definition 3.22). SD UD FC-Datalog generalizes SD DOLL FC-Datalog. We now show that, despite being extensions, UD FC-Datalog retains the low data complexity (Theorem 4.8) and the efficient checking of determinism (Proposition 4.9) of DOLL FC-Datalog, and SD UD FC-Datalog retains the low combined complexity of SD DOLL FC-Datalog (Theorem 4.8).
Theorem 4.8.
UD FC-Datalog captures L, and given a word w and an SD UD FC-Datalog program P with k relation symbols and maximum relation symbol arity a, we can decide whether w ∈ L(P) in time linear in |w| for fixed a, after preprocessing that depends only on P.
Proposition 4.9.
Membership of UD FC-Datalog can be decided in polynomial time.
Where a DOLL FC-Datalog program is required to match letter-by-letter, this is no longer the case for UD FC-Datalog, and a consequence of this is that we can simulate a deterministic regex with a much more concise SD UD FC-Datalog program.
Theorem 4.10.
Let α ∈ DRX have k memories, and let the total number of terminal symbols and memory recalls in α be n. We can express L(α) with an SD UD FC-Datalog program whose numbers of relation symbols and rules are bounded in terms of k and n alone.
As the following example shows, the extra flexibility permitted in the syntax of UD FC-Datalog allows us to write programs much more conveniently than for DOLL FC-Datalog.
Example 4.11.
In UD FC-Datalog, we can process a memory with a single rule that matches the whole memory content at once.
In DOLL FC-Datalog, to process a memory, we use a new relation symbol and rules that read the memory letter by letter, including, for every a ∈ Σ, a rule that reads the letter a.
We have thus demonstrated the design of a tailored fragment in the range spanned by DOLL FC-Datalog and deterministic linear FC-Datalog, in this case by adding word equations with uniquely defined variables. The UD and SD UD fragments retain the desirable properties of the more restrictive DOLL and SD DOLL fragments. As discussed in Section 3.5, we can see UD FC-Datalog as a generalization of multi-headed DFAs which is then not restricted to left-to-right parsing. As the next example shows, it can express context-free languages which cannot be recognized by a DPDA (see e.g. [22]).
Example 4.12.
The palindrome language can be expressed deterministically by a UD FC-Datalog program that compares the first and last letter of the current factor and recurses on the factor between them, using one pair of rules for each a ∈ Σ. This evaluates in a number of steps linear in |w|.
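The following sketch (our own, merely mirroring the idea of comparing the outermost letters and recursing inwards) checks palindromes with two heads in linearly many steps.

```python
def is_palindrome(w: str) -> bool:
    """Two heads start at the ends of w and move towards each other,
    comparing one letter per step; this takes linearly many steps in |w|."""
    left, right = 0, len(w) - 1
    while left < right:
        if w[left] != w[right]:
            return False
        left += 1
        right -= 1
    return True

print(is_palindrome("abba"), is_palindrome("abcba"), is_palindrome("ab"))
# True True False
```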
Furthermore, as we can simulate DRX, another feature that can be added to these tailored fragments are atoms that match a deterministic regex, which we can solve as a subroutine. We say a DRX-constraint is an expression (x ∈ α), for a variable x ∈ Ξ and a deterministic regex α, that denotes that x is mapped to an element of L(α) and σ(x) is a factor of the input word w.
To stay in L, we must ensure programs remain locally and globally deterministic. As DRX-constraints only check whether a variable matches a deterministic regex, we can decide local determinism as we would without the DRX-constraints. To retain global determinism (Definition 3.9), if deterministic regexes are used in multiple rules with the same head relation symbol, we must ensure that at most one rule can ever accept. We thus need to decide whether the intersection of the regexes' languages is empty. Unfortunately, intersection-emptiness for DRX is undecidable (see Theorem 9 of [17]). We say a regex is variable-star-free if none of its sub-regexes under a Kleene star or Kleene plus contain any variable operations. Intersection-emptiness for variable-star-free DRX is at least word equations-hard (see Proposition 8 of [17]). Thus, adding DRX-constraints means we lose efficient checking of determinism, another example of the trade-off between richer syntax and efficient determinism checking.
We can keep efficient determinism checking if we limit where we include DRX-constraints. If |ℛ_R| = 1 for some relation symbol R, global determinism for R is inherent. If we limit inclusions of DRX-constraints to rules whose head relation symbol R satisfies |ℛ_R| = 1, we can verify determinism by verifying that every DRX-constraint is in such a rule. We can decide this in polynomial time. Combining this with Proposition 4.9, we can check the whole program's determinism in polynomial time.
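A minimal sketch of that syntactic check (ours; a rule is represented as its head symbol plus a flag saying whether its body contains a DRX-constraint):

```python
from collections import Counter

def drx_constraints_safe(rules):
    """rules: list of (head_symbol, has_drx_constraint). Returns True iff every
    rule containing a DRX-constraint is the only rule for its head symbol."""
    rules_per_head = Counter(head for head, _ in rules)
    return all(rules_per_head[head] == 1 for head, has_drx in rules if has_drx)

rules = [("Ans", False), ("Ans", False), ("Check", True)]
print(drx_constraints_safe(rules))                       # True
print(drx_constraints_safe(rules + [("Check", False)]))  # False: Check now has two rules
```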
Example 4.13.
The UD FC-Datalog program sketched above defines its language using a DRX-constraint only in a rule whose head relation symbol R satisfies |ℛ_R| = 1; R is the only head relation symbol that has rules containing DRX-constraints.
Remark 4.14.
Another extension of FC is FC[REG], which captures the expressive power of generalized core spanners and also has polynomial-time data complexity. However, FC[REG] does not capture P, as it cannot express the language {a^(2^n) : n ≥ 0} (see [16]). We can naturally express this language in FC-Datalog:
Example 4.15.
We can express this language with an FC-Datalog program that has two rules: a base rule stating that the single letter a is in the relation, and a recursive rule stating that the concatenation of a word in the relation with itself is again in the relation.
5 Conclusions and Future Work
Freydenberger and Peterfreund [16] recently proposed FC-Datalog as an extension of the logic FC with recursion. FC-Datalog captures the complexity class P and can be seen as a language for expressing relations that can be used in spanner selections. Furthermore, independent of the spanner connection, FC-Datalog can also be seen as a version of Datalog on strings with a decidable model checking problem, where the same problem for a previous approach towards Datalog on strings is undecidable. We first showed that the combined complexity of FC-Datalog is EXPTIME-complete, presenting an opportunity for optimization of model checking. In this paper, we identified fragments of FC-Datalog with an efficient model checking problem, for both data and combined complexity, by performing an analysis of four restrictions: linearity, determinism, one letter lookahead, and strictly decreasing.
In Section 3.1 we proposed linear FC-Datalog, a fragment that captures NL. In Section 3.2 we identified and removed nondeterminism. Thus, deterministic linear FC-Datalog captures L. However, this restriction cannot be checked efficiently.
In Section 3.3 we imposed the one letter lookahead restriction on a program's word equations to obtain deterministic OLL (DOLL) FC-Datalog, which has both desirable properties: it captures L and determinism can be checked in polynomial time. We also showed that the combined complexity of model checking for both linear FC-Datalog and DOLL FC-Datalog is PSPACE-complete. Therefore, we added the further restriction of strictly decreasing (SD), and showed that SD DOLL FC-Datalog has linear combined complexity. We thus established the endpoints of a range of FC-Datalog fragments that all capture L, and showed how we can restrict this further to reduce combined complexity to linear time. Hence, we have paved the way to construct further fragments which can be tailored for particular applications.
We illustrated tailoring a fragment in Section 4, where we constructed uniquely defined FC-Datalog. This allows us to straightforwardly model deterministic regex without the need for a technically involved automata model. Yet, as for DOLL FC-Datalog: we capture L, determinism can be checked in polynomial time, and strictly decreasing programs have linear combined complexity. We then showed how deterministic regex can be used as atoms, and how we can restrict where they are used so as not to compromise these desirable properties.
As L is closed under complement, we can add stratified negation without affecting the combined complexity. Other convenient additions could include predicates that express properties of a string such as "is a letter", and functions that build on these to express properties such as "is the first/last letter". We could also add regular constraints: atoms that work in the same way as DRX-constraints, but match a classical regular expression rather than a deterministic regex, as in FC[REG]. Using these or other syntactic additions, we could further map the range between DOLL FC-Datalog and deterministic linear FC-Datalog, and investigate how these additions affect checking membership in the fragment. Furthermore, we could also look for sufficient conditions that ensure membership can be easily checked.
There are structural criteria for conjunctive queries in FC (and FC[REG]) that improve efficiency, such as acyclicity, as demonstrated by Freydenberger and Thompson [18]. Every FC-Datalog rule can be seen as a conjunctive query in FC. We could therefore add such structural criteria to FC-Datalog rules and examine how this improves the complexity of FC-Datalog fragments further. Another direction for research is to look at inexpressibility for these fragments. However, even for FC (as opposed to FC-Datalog), finding inexpressibility results is challenging (see [38]). Another logical next step is to use our restrictions for FC-Datalog to identify natural fragments in the spanner setting with desirable properties.
References
- [1] Serge Abiteboul, Richard Hull, and Victor Vianu. Foundations of Databases. Addison-Wesley, 1995. URL: http://webdam.inria.fr/Alice/.
- [2] Alfred V. Aho. Algorithms for Finding Patterns in Strings, pages 255–300. MIT Press, 1991.
- [3] Antoine Amarilli, Pierre Bourhis, Stefan Mengel, and Matthias Niewerth. Constant-Delay Enumeration for Nondeterministic Document Spanners. SIGMOD Rec., 49(1):25–32, 2020. doi:10.1145/3422648.3422655.
- [4] Dana Angluin. Finding Patterns Common to a Set of Strings. J. Comput. Syst. Sci., 21(1):46–62, 1980. doi:10.1016/0022-0000(80)90041-0.
- [5] Mena Badieh Habib Morgan and Maurice van Keulen. Information Extraction for Social Media. In Proc. SWAIE, pages 9–16, 2014. doi:10.3115/v1/W14-6202.
- [6] Pablo Barceló and Pablo Muñoz. Graph Logics with Rational Relations: The Role of Word Combinatorics. ACM Trans. Comput. Logic, 18(2), 2017. doi:10.1145/3070822.
- [7] Anthony J. Bonner and Giansalvatore Mecca. Sequences, Datalog, and Transducers. J. Comput. Syst. Sci., 57(3):234–259, 1998. doi:10.1006/jcss.1998.1562.
- [8] Pierre Boullier. Range Concatenation Grammars, pages 269–289. New Developments in Parsing Technology. Springer, 2004. doi:10.1007/1-4020-2295-6_13.
- [9] Anne Brüggemann-Klein. Regular expressions into finite automata. Theoretical Computer Science, 120(2):197–213, 1993. doi:10.1016/0304-3975(93)90287-4.
- [10] Evgeny Dantsin, Thomas Eiter, Georg Gottlob, and Andrei Voronkov. Complexity and Expressive Power of Logic Programming. ACM Computing Surveys, 33(3):374–425, 2001. doi:10.1145/502807.502810.
- [11] Heinz-Dieter Ebbinghaus and Jörg Flum. Finite Model Theory. Springer Monographs in Mathematics, 2nd edition, 1999.
- [12] Ronald Fagin, Benny Kimelfeld, Frederick Reiss, and Stijn Vansummeren. Document Spanners: A Formal Approach to Information Extraction. J. ACM, 62(2), 2015. doi:10.1145/2699442.
- [13] Fernando Florenzano, Cristian Riveros, Martín Ugarte, Stijn Vansummeren, and Domagoj Vrgoč. Efficient Enumeration Algorithms for Regular Document Spanners. ACM Trans. Database Syst., 45(1), 2020. doi:10.1145/3351451.
- [14] Dominik D. Freydenberger. A Logic for Document Spanners. Theory of Computing Systems, 63(7):1679–1754, 2019. doi:10.1007/s00224-018-9874-1.
- [15] Dominik D. Freydenberger and Mario Holldack. Document Spanners: From Expressive Power to Decision Problems. Theory Comput. Syst., 62(4):854–898, 2018. doi:10.1007/S00224-017-9770-0.
- [16] Dominik D. Freydenberger and Liat Peterfreund. The Theory of Concatenation over Finite Models. In Proc. ICALP 2021, pages 130:1–130:17, 2021. doi:10.4230/LIPIcs.ICALP.2021.130.
- [17] Dominik D. Freydenberger and Markus L. Schmid. Deterministic regular expressions with back-references. J. Comput. Syst. Sci., 105:1–39, 2019. doi:10.1016/J.JCSS.2019.04.001.
- [18] Dominik D. Freydenberger and Sam M. Thompson. Splitting Spanner Atoms: A Tool for Acyclic Core Spanners. In Proc. ICDT 2022, pages 10:1–10:18, 2022. doi:10.4230/LIPIcs.ICDT.2022.10.
- [19] V. M. Glushkov. The Abstract Theory of Automata. Russian Mathematical Surveys, 16(5), 1961. doi:10.1070/RM1961v016n05ABEH004112.
- [20] Georg Gottlob and Christos Papadimitriou. On the complexity of single-rule datalog queries. Information and Computation, 183(1):104–122, 2003. doi:10.1016/S0890-5401(03)00012-9.
- [21] B. Groz and S. Maneth. Efficient testing and matching of deterministic regular expressions. J. Comp. Syst. Sci., 89:372–399, 2017. doi:10.1016/j.jcss.2017.05.013.
- [22] John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, 3rd edition, 2007.
- [23] Artur Jeż. Word Equations in Nondeterministic Linear Space. In Proc. ICALP 2017, pages 95:1–95:13, 2017. doi:10.4230/LIPIcs.ICALP.2017.95.
- [24] Juhani Karhumäki, Filippo Mignosi, and Wojciech Plandowski. The expressibility of languages and relations by word equations. J. ACM, 47(3):483–505, 2000. doi:10.1145/337244.337255.
- [25] K. N. King. Alternating Multihead Finite Automata. Theoretical Computer Science, 61(2):149–174, 1988. doi:10.1016/0304-3975(88)90122-3.
- [26] S. C. Kleene. Representation of Events in Nerve Nets and Finite Automata, pages 3–42. Princeton University Press, 1956. doi:10.1515/9781400882618-002.
- [27] Antoni Koscielski and Leszek Pacholski. Complexity of Makanin's algorithm. J. ACM, 43(4):670–684, 1996. doi:10.1145/234533.234543.
- [28] Leonid Libkin. Complexity of First-Order Logic, pages 87–111. Elements of Finite Model Theory. Springer, 2004. doi:10.1007/978-3-662-07003-1_6.
- [29] Dean Light, Ahmad Aiashi, Mahmoud Diab, Daniel Nachmias, Stijn Vansummeren, and Benny Kimelfeld. SpannerLib: Embedding Declarative Information Extraction in an Imperative Workflow. Proc. VLDB Endow., 17(12):4281–4284, 2024. doi:10.14778/3685800.3685855.
- [30] Carsten Lutz and Leif Sabellek. Ontology-Mediated Querying with the Description Logic EL: Trichotomy and Linear Datalog Rewritability. In IJCAI 2017, pages 1181–1187, 2017. doi:10.24963/ijcai.2017/164.
- [31] S. Miyano, A. Shinohara, and T. Shinohara. Which classes of Elementary Formal Systems are polynomial-time learnable? In Proc. ALT 1992, pages 139–150, 1992.
- [32] Yoav Nahshon, Liat Peterfreund, and Stijn Vansummeren. Incorporating information extraction in the relational database model. In Proc. WebDB 2016, 2016. doi:10.1145/2932194.2932200.
- [33] Liat Peterfreund, Balder ten Cate, Ronald Fagin, and Benny Kimelfeld. Recursive Programs for Document Spanners. In Proc. ICDT 2019, pages 13:1–13:18, 2019. doi:10.4230/LIPIcs.ICDT.2019.13.
- [34] J. L. Ponty, D. Ziadi, and J. M. Champarnaud. A new quadratic algorithm to convert a regular expression into an automaton. In Automata Implementation, pages 109–119. Springer, 1997.
- [35] Markus L. Schmid and Nicole Schweikardt. Spanner Evaluation over SLP-Compressed Documents. In Proc. PODS 2021, pages 153–165, 2021. doi:10.1145/3452021.3458325.
- [36] Luc Segoufin. Enumerating with constant delay the answers to a query. In Proc. ICDT 2013, pages 10–20, 2013. doi:10.1145/2448496.2448498.
- [37] Sam M. Thompson and Dominik D. Freydenberger. Languages Generated by Conjunctive Query Fragments of FC[REG]. In Proc. DLT 2023, pages 233–245, 2023. doi:10.1007/978-3-031-33264-7_19.
- [38] Sam M. Thompson and Dominik D. Freydenberger. Generalized Core Spanner Inexpressibility via Ehrenfeucht-Fraïssé Games for FC. Proc. ACM Manag. Data, 2(2), 2024. doi:10.1145/3651143.
- [39] Yanshan Wang, Liwei Wang, Majid Rastegar-Mojarad, Sungrim Moon, Feichen Shen, Naveed Afzal, Sijia Liu, Yuqun Zeng, Saeed Mehrabi, Sunghwan Sohn, and Hongfang Liu. Clinical information extraction applications: A literature review. Journal of Biomedical Informatics, 77:34–49, 2018. doi:10.1016/j.jbi.2017.11.011.