Efficiently Computing the Minimum Rank of a Matrix in a Monoid of Zero-One Matrices
Abstract
A zero-one matrix is a matrix with entries from $\{0,1\}$. We study monoids containing only such matrices. A finite set of zero-one matrices generating such a monoid can be seen as the matrix representation of an unambiguous finite automaton, an important generalisation of deterministic finite automata which shares many of their good properties.
Let $\mathcal{M}$ be a finite set of $n \times n$ zero-one matrices generating a monoid of zero-one matrices, and let $k$ be the cardinality of $\mathcal{M}$. We study the computational complexity of computing the minimum rank of a matrix in the monoid generated by $\mathcal{M}$. By using linear-algebraic techniques, we show that this problem is in NC and can be solved in polynomial time. We also provide a combinatorial algorithm finding a matrix of minimum rank, whose polynomial running time is stated in terms of $\omega$, the matrix multiplication exponent. As a byproduct, we show a very weak version of a generalisation of the Černý conjecture: there always exists a straight line program of polynomial size describing a product resulting in a matrix of minimum rank.
For the special case corresponding to complete DFAs (that is, for the case where all matrices have exactly one $1$ in each row), the minimum rank is the size of the smallest image of the set of states under the action of a word. Our combinatorial algorithm finds a matrix of minimum rank even faster in this case.
Keywords and phrases: matrix monoids, minimum rank, unambiguous automata
Funding: Andrew Ryzhikov: Supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 852769, ARiAT).
2012 ACM Subject Classification: Theory of computation → Formal languages and automata theory; Computing methodologies → Symbolic and algebraic manipulation
Acknowledgements: We thank the anonymous reviewers for their helpful comments that improved the presentation of the paper.
Editors: Olaf Beyersdorff, Michał Pilipczuk, Elaine Pimentel, and Nguyễn Kim Thắng
1 Introduction
Matrix monoids are a rich and versatile object used in formal verification, program analysis, dynamical systems and weighted automata. However, many of their properties are in general undecidable. One such example is the well-studied matrix mortality problem. Given a finite set $\mathcal{M}$ of matrices, it asks if the monoid generated by $\mathcal{M}$ (that is, the set of all products of matrices from $\mathcal{M}$) contains the zero matrix. This problem is undecidable already for integer matrices [40], and was studied for several decidable special cases, see e.g. [16, 6, 47].
Even if $\mathcal{M}$ is a set of zero-one matrices (that is, matrices with entries in $\{0,1\}$), matrix mortality is PSPACE-complete [47]. We thus restrict our attention to the case where the whole monoid generated by $\mathcal{M}$ consists of zero-one matrices; in this case, matrix mortality becomes decidable in polynomial time [34]. We call such monoids zero-one matrix monoids. Intuitively, when multiplying any two matrices from such a monoid, we never get $1 + 1$ as a subexpression. Zero-one matrix monoids have a rich structure while still admitting good algorithmic properties. They correspond precisely to unambiguous finite automata, and find applications in formal verification [3], variable-length codes [9] and symbolic dynamics [38]. They are also an interesting special case of finite monoids of rational matrices (studied in, e.g., [39, 28, 1, 13]), monoids of nonnegative matrices (studied in, e.g., [41, 10, 51, 22]), and, in the case where they do not contain the zero matrix, of matrix monoids with constant spectral radius [42].
In this paper, we consider a problem that can be seen as a natural generalisation of matrix mortality: given a finite set $\mathcal{M}$ generating a zero-one monoid, find the minimum real rank of a matrix in this monoid. By the real rank of a matrix we mean the dimension of the subspace generated by its columns over the reals. Clearly, this rank is zero if and only if the monoid contains the zero matrix. Computing the minimum real rank of a matrix in a zero-one matrix monoid is a much more tractable problem than deciding other similar properties: for example, checking if a zero-one matrix monoid contains a matrix of a given real rank was shown to be NP-hard [24] (in fact, it is PSPACE-complete, which follows directly from [8, Theorem 3]: add a fresh state and define all yet undefined transitions to lead to this state).
The goal of our paper is threefold. Firstly, we present efficient algorithms for analysing monoids of zero-one matrices and unambiguous finite automata. Secondly, to obtain these algorithms, we provide new structural properties of such monoids and automata that might be interesting on their own. Thirdly, we strengthen the connections between the areas of synchronising automata, weighted automata and matrix semigroups by transferring methods and tools between them. We also highlight open problems in the intersection of these areas.
2 Existing results and our contributions
Throughout the paper, we always assume that matrix monoids are defined by sets of generators, and all matrices are square zero-one unless stated otherwise.
Complete DFAs
An $n \times n$ zero-one matrix with exactly one $1$ in every row can be equivalently seen as a transformation of a set of size $n$. A set of such matrices generates a zero-one matrix monoid, and can be seen as a complete deterministic finite (semi-)automaton (complete DFA) $\mathcal{A} = (Q, \Sigma, \delta)$. (In this paper, all automata are semi-automata, meaning that they do not have any initial or accepting states, and do not recognise any languages. Following the usual conventions (as in, e.g., [9]), we omit "semi-", in particular because it would make abbreviations like DFA less recognisable.) Here, $\Sigma$ is a finite alphabet whose letters correspond to the generating matrices, and $\delta : Q \times \Sigma \to Q$ is the transition function defined in such a way that for each $a \in \Sigma$, $\delta(\cdot, a)$ is the transformation of $Q$ induced in the natural way by the matrix corresponding to $a$. Thus, words over $\Sigma$ correspond to products of the generating matrices.
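To make this correspondence concrete, here is a minimal Python sketch (our own illustration, not from the paper) converting between the two views; `matrix_to_transformation` assumes its input has exactly one 1 per row.

```python
def matrix_to_transformation(M):
    """Zero-one matrix with exactly one 1 per row -> transformation t, t[p] = q."""
    assert all(sum(row) == 1 for row in M)
    return [row.index(1) for row in M]

def transformation_to_matrix(t):
    n = len(t)
    return [[1 if t[p] == q else 0 for q in range(n)] for p in range(n)]

# The matrix product M(a) M(b) corresponds to applying a first, then b.
```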
The rank of a word $w$ in $\mathcal{A}$ is the size of the image of $Q$ under the transformation corresponding to $w$. Equivalently, it is the real rank of the matrix corresponding to $w$. The rank of a complete DFA is the minimum among the ranks of all its words. This concept was studied from the perspectives of automata theory [43, 30] and the theory of transformation semigroups [48, 31]. It is the subject of the rank conjecture (called the Černý–Pin conjecture in [43]), which states that every complete DFA with $n$ states and rank $r$ admits a word of rank $r$ having length at most $(n-r)^2$. The Černý conjecture, one of the oldest open problems in combinatorial automata theory [50], is the special case with $r = 1$. We refer to the surveys [49, 4, 30, 50] for the vast literature on the Černý conjecture. Underlying digraphs of complete DFAs of a given rank were studied in [12, 4] in the context of the road colouring problem.
The rank of an $n$-state complete DFA over an alphabet of size $k$ can be found in polynomial time [43, Theorem 1]. In contrast, for any fixed $r \ge 2$, the problem of checking if a complete DFA admits a word of rank $r$ is NP-hard [24]. Checking if an $n$-state complete DFA over an alphabet of size $k$ has rank one is NL-complete [26, 50], and can be done in $O(kn^2)$ time [20, 49]. For each complete DFA of rank $r$, there exists a word of rank $r$ of length cubic in $n$ [37], and if the rank is one, finding a word of rank one can be done in $O(n^3 + kn^2)$ time and $O(n^2)$ space [20].
Unambiguous finite automata
Generalising the case of complete DFAs, a set $\mathcal{M}$ of zero-one matrices generating a zero-one matrix monoid can be equivalently seen as an unambiguous nondeterministic finite (semi-)automaton (UFA). Let $Q$ be its set of states. To each matrix in $\mathcal{M}$ we again associate a letter in the alphabet $\Sigma$, and the transition relation $\Delta \subseteq Q \times \Sigma \times Q$ is defined so that $(p, a, q) \in \Delta$ if and only if the entry $(p, q)$ of the matrix corresponding to $a$ is equal to one. Just as in the complete DFA case, words over $\Sigma$ naturally correspond to products of matrices from $\mathcal{M}$.
The obtained NFA then has the property that is sometimes called diamond-freeness: for every two states $p, q$ and every word $w$, there is at most one path from $p$ to $q$ labelled by $w$. A simple reachability argument shows that the length of a shortest word labelling two such paths, if it exists, is at most quadratic in the dimension of the matrices. Hence, deciding whether an NFA is a UFA (and thus whether a set of zero-one matrices generates a zero-one monoid) is in coNL = NL. It is actually NL-complete, as described in the next subsection.
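The reachability argument above suggests a simple quadratic-space test. The following Python sketch (our own illustration, not an algorithm from the paper) checks the diamond-free property via the pair ("square") construction: the NFA is ambiguous exactly if some pair $(q_1, q_2)$ with $q_1 \neq q_2$ is reachable from a diagonal pair and can reach a diagonal pair.

```python
from collections import deque

def is_unambiguous(n, delta):
    """delta: dict mapping each letter to the set of its transitions (p, q).
    Returns True iff the NFA on states 0..n-1 is diamond-free (unambiguous)."""
    fwd, bwd = {}, {}
    for pairs in delta.values():
        for (p, p1) in pairs:
            for (q, q1) in pairs:
                fwd.setdefault((p, q), set()).add((p1, q1))
                bwd.setdefault((p1, q1), set()).add((p, q))

    def reachable(starts, edges):
        seen, work = set(starts), deque(starts)
        while work:
            u = work.popleft()
            for v in edges.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    work.append(v)
        return seen

    diag = [(p, p) for p in range(n)]
    from_diag = reachable(diag, fwd)   # pairs reachable from the diagonal
    to_diag = reachable(diag, bwd)     # pairs that can reach the diagonal
    # an off-diagonal pair in both sets witnesses two runs with the same label
    return all(p == q for (p, q) in from_diag & to_diag)
```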
A UFA is called complete if it does not admit a word whose matrix is the zero matrix. For an $n$-state UFA, the length of such a word, if it exists, is polynomial in $n$ [34]. The best known lower bound is quadratic in $n$, and is achieved by a series of DFAs [44]. For UFAs, this quadratic bound was conjectured to be tight [43, Conjecture 2]. Checking if a UFA is complete can be done in NC$^2$ [34].
The real rank of a UFA is the minimum among the real ranks of the matrices corresponding to words. It was shown in [14] that for an $n$-state UFA there always exists a word of minimum rank of length polynomial in $n$. For $n$-state strongly connected Eulerian UFAs of rank one, a subclass with remarkably nice properties, there always exists a short word of rank one [15, Corollary 4]. All mentioned constructions also provide polynomial time algorithms that construct words with the required properties (in particular, with a length within the stated bounds).
Applications to variable-length codes
A variable-length code (or simply a code) is a set $X$ of finite words over an alphabet $\Sigma$ such that every finite word over $\Sigma$ has at most one factorisation over $X$. In other words, a code is a basis of a free submonoid of $\Sigma^*$.
The definitions of both UFAs and codes rely, intuitively, on the uniqueness of certain representations. In fact, UFAs and codes are tightly related. Let us illustrate this relationship. If the cardinality of a code $X$ is finite, one can construct its flower automaton, which is a UFA with a chosen state such that, for each word from $X$, there is a separate cycle containing this state and labelled by this word; see Figure 1 (left) for an example. More generally, codes that are regular languages correspond precisely to strongly connected UFAs in a similar way; see [9, Chapter 4] for the details.
A useful corollary of the construction of the flower automaton is the fact that deciding if a set of zero-one matrices generates a zero-one monoid is NL-hard. Indeed, a finite set of words is a code if and only if its flower automaton is unambiguous [9]. Deciding if a finite set of words is a code is NL-complete [45], and the flower automaton can be constructed in AC$^0$.
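For intuition, the classical Sardinas–Patterson procedure (shown here for context only; it is not the flower-automaton argument used in the text) decides whether a finite set of words is a code:

```python
def quotients(A, B):
    """Left quotient A^{-1} B: all w such that a w = b for some a in A, b in B."""
    return {b[len(a):] for a in A for b in B if b.startswith(a)}

def is_code(X):
    """Sardinas-Patterson test: X is a code iff the empty word is never
    produced as a dangling suffix."""
    X = set(X)
    S = quotients(X, X) - {""}      # initial dangling suffixes
    seen = set()
    while S:
        if "" in S:                  # some word has two factorisations
            return False
        seen |= S
        S = (quotients(X, S) | quotients(S, X)) - seen
    return True

print(is_code({"a", "ab", "ba"}))   # False: a.ba = ab.a
```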
A code $X$ over $\Sigma$ is called complete if every word over $\Sigma$ is a factor of a concatenation of codewords, that is, for every word $w \in \Sigma^*$ there exist $u, v \in \Sigma^*$ with $uwv \in X^*$. A code that is a regular language is complete if and only if the corresponding UFA is complete [9]. For complete codes that are regular languages, the real rank of the corresponding UFA is equal to a natural and important parameter called the degree of a code [9, Proposition 9.6.1].
Let us explain the intuition behind the notion of degree; see [9, Chapter 9.6] for the formal definitions. For each word $w \in \Sigma^*$ we can consider all possible factorisations over $X$ of all its extensions $uwv$ with $u, v \in \Sigma^*$, called interpretations of $w$. Two such interpretations either match in at least one position (as in Figure 1 (top right), between the second and the third letter), or do not match in any position (as in Figure 1 (bottom right)), in which case they are called disjoint. The degree of a word is the number of pairwise disjoint interpretations of this word. The degree of a code $X$ is the minimum nonzero degree of all words $w \in \Sigma^*$.
A particularly important case is when a complete code has degree one. Then there exists a word $x \in X^*$ (called a synchronising word) such that for any concatenation of codewords $w = uxv$ with $u, v \in \Sigma^*$ we have $ux, xv \in X^*$. Intuitively, this means that the two halves $ux$ and $xv$ can be decoded separately and independently.
Computational complexity classes
In this paper, we characterise the computational complexity of problems by showing that they belong to the classes NL and NC$^2$; see [2, 23] for their formal definitions. NL is the class of problems solvable in nondeterministic logarithmic space. NC$^k$ is the class of problems solvable by $O(\log^k n)$-depth polynomial-size bounded fan-in Boolean circuits, and NC is the union of these classes for all $k$. The class NC represents problems that have efficient parallel algorithms, and is a subclass of problems solvable in polylogarithmic space [2]. Intuitively, NC is the class of problems that can be solved using local computations, as opposed to P-complete problems, which are inherently sequential and thus require storing the entire structure in the memory unless P = NC. Showing that a problem is in NC also allows one to obtain a polynomial space algorithm for the case of an exponential-size input computed by a PSPACE-transducer (as done, e.g., in [3]), which is not necessarily true for arbitrary problems solvable in polynomial time.
Our contributions
The known results about reachability properties of zero-one matrix monoids (including the special case of complete DFAs), such as [14, 20, 46, 34], mostly construct a product of minimum rank iteratively, with each iteration decreasing the number of different rows or the rank of a matrix. Such an approach is inherently sequential, since the matrix in the new iteration has to depend on the previous one, which thus has to be constructed explicitly. In particular, this requires matrix multiplication at every step, which heavily increases the time complexity. In this paper, we take a different direction by strongly relying on linear algebra. While linear-algebraic arguments are used widely in the synchronising automata literature, they mostly serve to decrease the number of iterations in the iterative scheme described above. Our approach is to instead relate the rank of a zero-one matrix monoid to efficiently computable linear-algebraic properties, without explicitly constructing a matrix of minimum rank.
Our first main result is that computing the rank of a zero-one matrix monoid provided in the input by a generating set of matrices of dimension $n$ (or, equivalently, by a UFA with $n$ states and $k$ letters) is in NC$^2$ (Theorem 18) and can be done in polynomial time (Theorem 22). Previously, it was not known that this problem is in NC, not even for complete DFAs or finite complete codes. Moreover, the naive implementation of the polynomial time algorithm from the literature [14] has a significantly higher time complexity.
These results rely on a new concept of weight of the matrices in a complete zero-one monoid. This theory of matrix weight, which we develop in Section 4, is our main technical contribution. Matrix weight is a natural generalisation of an existing notion of weight of columns of matrices in complete DFAs, which was used, e.g., in connection with the road colouring problem [21, 29, 25]. We show that all matrices in a zero-one matrix monoid have the same weight, and that this weight is tightly related to both the rank of the monoid and to the maximal weight of the columns and rows of its matrices (Section 4.3). This connection allows us to reduce the computation of the monoid rank to the computation of maximal column and row weight. Then we show that we can instead compute the weight of “maximal pseudo-columns” and “maximal pseudo-rows”, as they have the same weight as maximal columns and rows, respectively (Section 4.4). Finally, we transfer linear-algebraic techniques from the literature on weighted automata to compute those weights, and thus the rank of the monoid, efficiently (Section 5 and Section 6.2).
We complement the linear-algebraic algorithms with a combinatorial algorithm, our second main contribution. While the latter has a higher time complexity in the general case (Theorem 23), it also constructs a matrix of minimum rank in addition to computing the rank of the monoid. For complete DFAs, our combinatorial algorithm runs faster (Theorem 24), thus outmatching the linear-algebraic counterpart and improving upon the algorithm known before [43]. The two key technical ingredients of our combinatorial algorithm are explained in the beginnings of Section 6.3 and Section 6.4. Our results on the time complexity of computing the rank are summarised in the table below.
| class | previous best | linear-algebraic algorithm | combinatorial algorithm |
|---|---|---|---|
| UFA | [14] | Theorem 22 | Theorem 23 |
| complete DFA | [43] | see Section 6.2 | Theorem 24 |
3 Main definitions
Let $Q$ be a finite set, which we view as a set of states. For $S \subseteq Q$ we write $e_S \in \{0,1\}^Q$ for the column vector such that $(e_S)_q = 1$ if and only if $q \in S$. We may write $e_q$ for $e_{\{q\}}$. For a column vector $x$ we write $x^T$ for the transpose, a row vector. For two column vectors $x, y$ we write $x \le y$ if the inequality holds component-wise. We view the elements of $\{0,1\}^{Q \times Q}$ (and similar sets) as matrices. Vector and matrix addition and multiplication are defined in the usual way (over the reals). We denote by $\langle X \rangle$ the span of a set $X$ of vectors, i.e., the set of all linear combinations of vectors from $X$ with real coefficients. The real rank of a matrix $A$ is, as usual, the dimension of the column space of $A$ over the field of the reals (which equals the dimension of the row space); i.e., $\mathrm{rank}(A) = \dim\,\langle\{A\,e_q : q \in Q\}\rangle$.
Let $\mathcal{M}$ be a set of matrices from $\{0,1\}^{Q \times Q}$, and let $\Sigma$ be a finite alphabet with $|\Sigma| = |\mathcal{M}|$. We associate the letters with the matrices by fixing a bijection $M : \Sigma \to \mathcal{M}$, writing $M(a)$ for the matrix associated with $a \in \Sigma$. Throughout this paper, when speaking about computational complexity, we assume that the input is the function $M$ from letters to zero-one matrices. We can extend $M$ naturally (and often implicitly) to $\Sigma^*$ by defining $M(uv) = M(u)\,M(v)$. Thus, $M$ is a monoid homomorphism from $\Sigma^*$ to the matrix monoid generated by $\mathcal{M}$. Note that $M(\varepsilon) = I$, where $\varepsilon$ denotes the empty word and $I$ the identity matrix. In this paper, we consider only monoid morphisms that are unambiguous, i.e., such that $M(w) \in \{0,1\}^{Q \times Q}$ for all $w \in \Sigma^*$. If $M$ is unambiguous, $\mathcal{M}$ generates a finite matrix monoid $M(\Sigma^*)$.
Viewing the matrices as transition matrices of an automaton, we obtain a nondeterministic finite (semi-)automaton (NFA) $\mathcal{A} = (Q, \Sigma, \Delta)$ with transition relation $\Delta = \{(p, a, q) \in Q \times \Sigma \times Q : M(a)_{p,q} = 1\}$. Recall that in this paper automata do not have dedicated initial or accepting states (see Section 2). We can extend $\Delta$ from letters to words in the usual way, so that $(p, w, q) \in \Delta$ if and only if $M(w)_{p,q} \ge 1$. An NFA is unambiguous (or diamond-free) if for every two states $p, q$ and for every two words $u, v$ there exists at most one state $r$ with $(p, u, r) \in \Delta$ and $(r, v, q) \in \Delta$; see Figure 2 for an illustration of the forbidden configuration. We denote unambiguous NFAs as UFAs. Recall from the previous section that deciding if an NFA is unambiguous is NL-complete. In the following, we often identify $M$ with the corresponding UFA $\mathcal{A}$. In particular, a monoid homomorphism is unambiguous if and only if the corresponding NFA is unambiguous.
When $M$ (or, equivalently, $\mathcal{A}$) is clear from the context, we may write $\Delta(p, w) := \{q \in Q : M(w)_{p,q} = 1\}$. Then $e_p^T\,M(w) = e_{\Delta(p,w)}^T$. Similarly, we may write $\Delta^{-1}(q, w) := \{p \in Q : M(w)_{p,q} = 1\}$, so that $M(w)\,e_q = e_{\Delta^{-1}(q,w)}$. We call $M$ strongly connected if for all $p, q \in Q$ there is $w \in \Sigma^*$ with $M(w)_{p,q} = 1$. We call $M$ complete if $\mathbf{0} \notin M(\Sigma^*)$, where $\mathbf{0}$ is the zero matrix. The real rank of $M$ (and of $\mathcal{A}$) is $\mathrm{rank}(M) := \min\{\mathrm{rank}(M(w)) : w \in \Sigma^*\}$. Note that $M$ is complete if and only if $\mathrm{rank}(M) \ge 1$.
Suppose that $|\Delta(q, a)| = 1$ holds for every $q \in Q$ and $a \in \Sigma$ or, equivalently, that every matrix in $\mathcal{M}$ has exactly one $1$ in each row. Then $|\Delta(q, w)| = 1$ holds for every $q \in Q$ and $w \in \Sigma^*$. We call such UFAs complete deterministic finite (semi-)automata (complete DFAs), and we may write $\delta$ instead of $\Delta$ to highlight that it is a transition function instead of a transition relation. A complete DFA is complete in the sense defined above (i.e., $\mathbf{0} \notin M(\Sigma^*)$), and for any $w \in \Sigma^*$ we have that $\mathrm{rank}(M(w))$ is the number of nonzero columns in $M(w)$.
4 Main concepts and the linear algebra toolbox
In this section, we introduce the main tools that we will use for both the linear-algebraic and the combinatorial algorithms in later sections. Until Section 4.5, we fix an unambiguous, complete, and strongly connected monoid morphism $M : \Sigma^* \to \{0,1\}^{Q \times Q}$. In Section 4.5 we will show that the case where $M$ is not strongly connected can be easily reduced to the strongly connected case.
4.1 Columns, rows and the structure of minimum rank matrices
The concept of maximal columns and rows plays a crucial role in dealing with reachability problems for unambiguous monoid morphisms. Abusing language slightly in the following, by column we refer to column vectors of the form $M(w)\,e_q$ where $w \in \Sigma^*$ and $q \in Q$. Similarly, a row is of the form $e_q^T\,M(w)$. See Figure 3 for an example. In the case of complete DFAs, all rows are of the form $e_p^T$ for some $p \in Q$. This fact makes complete DFAs significantly simpler to deal with than general complete UFAs.
A column $c$ is called maximal if there is no column $c'$ such that $c \le c'$ and $c \ne c'$. Maximal rows are defined in the same way. Recall that the inequalities are taken component-wise.
Let $A$ be a zero-one matrix. One can view $\mathrm{rank}(A)$ as the least number $r$ such that there are matrices $B \in \mathbb{R}^{Q \times r}$ and $C \in \mathbb{R}^{r \times Q}$ with $A = BC$. Define the unambiguous rank $\mathrm{urank}(A)$ as the least number $r$ such that there are matrices $B \in \{0,1\}^{Q \times r}$ and $C \in \{0,1\}^{r \times Q}$ such that $A = BC$. Analogously to $\mathrm{rank}(M)$, define also $\mathrm{urank}(M) := \min\{\mathrm{urank}(M(w)) : w \in \Sigma^*\}$. Clearly, $\mathrm{rank}(A) \le \mathrm{urank}(A)$, and the inequality can be strict, but in Corollary 2 below we show that $\mathrm{rank}(M) = \mathrm{urank}(M)$. The reason we are interested in the unambiguous rank is that Theorem 1 below implies that there is always a matrix with a very simple structure such that its unambiguous rank is equal to its real rank and both ranks are minimum.
In the following, let us write $\mathrm{urank}(w)$ for $\mathrm{urank}(M(w))$ when $M$ is understood. A word $w$ is of minimum unambiguous rank if $\mathrm{urank}(w) = \mathrm{urank}(M)$. If $w$ is of minimum unambiguous rank, then so is $uwv$ for all $u, v \in \Sigma^*$.
Words of unambiguous rank one, known as synchronising words, play an especially important role due to their applications in the theory of codes, as explained in Section 2. It is easy to see that a word $w$ has unambiguous rank one if and only if there exist sets $S, T \subseteq Q$ with $M(w) = e_S\,e_T^T$; that is, $w$ maps a state $p$ to a state $q$ if and only if $p \in S$ and $q \in T$. For complete DFAs, we moreover have that $S = Q$ and $T$ has cardinality one.
Theorem 1 (Césari [17]).
Let $w$ be of minimum unambiguous rank, and let $r = \mathrm{urank}(w)$. There are pairwise disjoint sets $S_1, \ldots, S_r \subseteq Q$ and pairwise disjoint sets $T_1, \ldots, T_r \subseteq Q$ such that
$$M(w) \;=\; \sum_{i=1}^{r} e_{S_i}\,e_{T_i}^T.$$
Moreover, each $e_{S_i}$ and each $e_{T_i}^T$ is, respectively, a maximal column and a maximal row.
This theorem will play a central role. A proof can be found in [5, Proposition 4]. In the case of a complete DFA, each $T_i$ is a singleton and Theorem 1 is fairly obvious.
In Theorem 1, since the $S_i$ are pairwise disjoint and the $T_i$ are pairwise disjoint, each $e_{S_i}\,e_{T_i}^T$ forms, intuitively, a "combinatorial rectangle", and no such rectangle shares a row or a column with any other rectangle. The column vectors $e_{S_1}, \ldots, e_{S_r}$ are exactly the nonzero columns of $M(w)$ and are linearly independent, and the row vectors $e_{T_1}^T, \ldots, e_{T_r}^T$ are exactly the nonzero rows of $M(w)$ and are linearly independent. Thus, $r$ is the number of distinct nonzero columns and also the number of distinct nonzero rows in $M(w)$. It follows that $\mathrm{rank}(M(w)) = \mathrm{urank}(w)$. Thus we have:
Corollary 2.
We have $\mathrm{rank}(M) = \mathrm{urank}(M)$.
We can thus refer to the common value $\mathrm{rank}(M) = \mathrm{urank}(M)$ simply as the rank of $M$. For words $w$ that are not of minimum unambiguous rank, we may have $\mathrm{rank}(M(w)) < \mathrm{urank}(w)$, but the rank of such matrices will rarely play a role in the following. In what follows, we call words of minimum unambiguous rank simply words of minimum rank. Since we never refer to the real rank of words below, this will not lead to any confusion.
4.2 The weight of columns and rows
The results in this subsection, about the column and row vectors that appear in the matrices $M(w)$, are mostly due to [17]; see also [5, Section 3]. Since a notion of column and row weight will be crucial for us in the later development, we phrase and prove the results around these concepts, but we do not view the lemmas of this subsection as novel.
Define $P := \sum_{a \in \Sigma} M(a)$. Since $M$ is strongly connected, $P$ is irreducible. Since $M$ is unambiguous, the spectral radius of $P$ is at most $k$, and since $M$ is complete, it is at least $k$. Thus, the spectral radius of $P$ equals $k$. Since $P$ is irreducible, it follows from basic Perron-Frobenius theory that $P$ has an eigenvalue $k$ and every right eigenvector with eigenvalue $k$ is a multiple of a strictly positive vector. Since $P$ has only rational entries, we can assume that this vector is rational. Similarly for left eigenvectors. Therefore, there are $\alpha, \beta \in \mathbb{Q}_{>0}^Q$ with $\alpha^T P = k\,\alpha^T$ and $P\beta = k\,\beta$. Without loss of generality, we assume that $\alpha^T \beta = 1$.
In the complete DFA case, since $M(a)\,e_Q = e_Q$ for all $a \in \Sigma$, we have $P\,e_Q = k\,e_Q$ and so it is natural to take $\beta = e_Q$. In that case, $\alpha^T P = k\,\alpha^T$ means that $\alpha^T$ is the (unique) stationary distribution of the Markov chain whose transition probabilities are given by the row-stochastic matrix $\frac{1}{k}P$; intuitively, in this Markov chain a letter is picked uniformly at random in every step.
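In practice, $\alpha$ and $\beta$ can be obtained by an eigenvector computation. A NumPy sketch (our own illustration; it assumes the generated monoid is zero-one, strongly connected and complete, so that $k$ is the Perron eigenvalue of $P$):

```python
import numpy as np

def weight_vectors(mats):
    """Compute alpha, beta > 0 with alpha^T P = k alpha^T, P beta = k beta
    and alpha^T beta = 1, where P is the sum of the k generator matrices."""
    k = len(mats)
    P = sum(np.asarray(M, dtype=float) for M in mats)
    # right eigenvector for the Perron eigenvalue k
    vals, vecs = np.linalg.eig(P)
    beta = np.real(vecs[:, np.argmin(np.abs(vals - k))])
    # left eigenvector: right eigenvector of the transpose
    vals_l, vecs_l = np.linalg.eig(P.T)
    alpha = np.real(vecs_l[:, np.argmin(np.abs(vals_l - k))])
    alpha, beta = np.abs(alpha), np.abs(beta)  # Perron vectors are positive
    alpha /= alpha @ beta                      # normalise alpha^T beta = 1
    return alpha, beta
```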
Define the weight of a column $c$ and of a row $\varrho$ by $\alpha^T c$ and $\varrho\,\beta$, respectively. Denote the maximum column weight and the maximum row weight by $w_{\mathrm{col}}$ and $w_{\mathrm{row}}$, respectively, i.e.,
$$w_{\mathrm{col}} \;=\; \max\{\alpha^T c : c \text{ is a column}\} \qquad\text{and}\qquad w_{\mathrm{row}} \;=\; \max\{\varrho\,\beta : \varrho \text{ is a row}\}.$$
A column $c$ is called of maximum weight if $\alpha^T c = w_{\mathrm{col}}$, and analogously for rows. In the complete DFA case, every row is of the form $e_q^T$ for some $q \in Q$, hence every row is of maximum weight.
Lemma 3.
A column (respectively, row) is maximal if and only if it is of maximum weight.
An important property that we will need later is that the set of maximal columns is closed under left multiplication by matrices from the monoid, as stated in the following lemma. Note that this is no longer true without the completeness assumption, and is the key reason why the case of complete matrix monoids is easier to deal with.
Lemma 4.
Let $w \in \Sigma^*$ and $q \in Q$ be such that $M(w)\,e_q$ is a maximal column. Then $M(uw)\,e_q$ is a maximal column for all $u \in \Sigma^*$.
The following lemma will be useful later to construct minimum rank matrices from maximal columns and rows.
Lemma 5.
Let $w \in \Sigma^*$ be such that all nonzero columns and rows in $M(w)$ are maximal. Then $w$ is of minimum rank.
4.3 Weight preservation property and minimum rank
Every word $w$ of minimum rank in a complete DFA induces a partition of the state set into subsets of states mapped by $w$ to the same state (that is, into columns). It was observed by Friedman in [21] that all sets of such a partition have the same weight. This observation has many applications to variations of the road colouring problem [21, 29, 25, 30]. Moreover, it was proved in [25, Theorem 6], again in connection with road colouring, that for every word $w$ the weights of all columns in $M(w)$ sum up to $1$ (assuming $\alpha^T e_Q = 1$, as suggested previously). This can be seen as a weight preservation property: the total weight of the columns in the matrix of a word is preserved under multiplication by any matrix from the monoid. As a result we get that $\mathrm{rank}(M) \cdot w_{\mathrm{col}} = 1$, and hence $\mathrm{rank}(M) = 1/w_{\mathrm{col}}$. The proof of the weight preservation property for complete DFAs is quite simple and relies on the fact that for a state $q$ and a word $w$ the set $\Delta(q, w)$ is always a singleton. For complete UFAs this is no longer true; in particular, $\Delta(q, w)$ can be the empty set, thus permanently "losing" some weight collected in $q$. Hence, a more refined property is required. The following result provides such a property. It also turns out that its proof requires more sophisticated techniques than in the complete DFA case.
Theorem 6.
For all $w \in \Sigma^*$ we have $\alpha^T M(w)\,\beta = 1$.
Similarly to the complete DFA case, this result allows us to reduce computing $\mathrm{rank}(M)$ to computing $w_{\mathrm{col}}$ and $w_{\mathrm{row}}$, which we will use later in our algorithms: together with Lemma 7 below, it yields $\mathrm{rank}(M) = 1/(w_{\mathrm{col}} \cdot w_{\mathrm{row}})$. Recall that we have defined $\alpha$ and $\beta$ so that $\alpha^T \beta = 1$.
Towards a proof of Theorem 6 we first prove the following lemma.
Lemma 7.
Let $w$ be of minimum rank. Then $\alpha^T M(w)\,\beta = \mathrm{rank}(M) \cdot w_{\mathrm{col}} \cdot w_{\mathrm{row}}$.
Proof.
Let $S_1, \ldots, S_r$ and $T_1, \ldots, T_r$ (with $r = \mathrm{rank}(M)$) be as in Theorem 1. Each $e_{S_i}$ and each $e_{T_i}^T$ is of maximum weight. Thus,
$$\alpha^T M(w)\,\beta \;=\; \sum_{i=1}^{r} \left(\alpha^T e_{S_i}\right)\left(e_{T_i}^T\,\beta\right) \;=\; r \cdot w_{\mathrm{col}} \cdot w_{\mathrm{row}}.$$
We also need the following proposition.
Proposition 8.
Let $c \in \mathbb{R}$ be such that $\alpha^T M(w)\,\beta = c$ holds for all $w$ of minimum rank. Then $\alpha^T M(w)\,\beta = c$ holds for all $w \in \Sigma^*$.
Proof.
Let $w \in \Sigma^*$; we show that $\alpha^T M(w)\,\beta = c$. Let $w_0$ be of minimum rank. Recall that every word that contains $w_0$ as a factor is of minimum rank. For every $\ell \in \mathbb{N}$, partition $\Sigma^\ell$ into sets $F_\ell$ and $N_\ell$ so that $F_\ell = \{u \in \Sigma^\ell : w_0 \text{ is a factor of } u\}$ and $N_\ell = \Sigma^\ell \setminus F_\ell$; i.e., $F_\ell$ and $N_\ell$ are the sets of length-$\ell$ words that do or do not contain $w_0$ as a factor, respectively. For all $u \in F_\ell$, both $u$ and $wu$ are of minimum rank. Thus, we have $\alpha^T M(wu)\,\beta = c$ for all $u \in F_\ell$. It follows that
$$\alpha^T M(w)\,\beta \;=\; \frac{1}{k^\ell}\sum_{u \in \Sigma^\ell}\alpha^T M(wu)\,\beta \;=\; \frac{|F_\ell|}{k^\ell}\,c \;+\; \frac{1}{k^\ell}\sum_{u \in N_\ell}\alpha^T M(wu)\,\beta, \tag{1}$$
where the first equality holds since $P\beta = k\beta$ implies $\frac{1}{k^\ell}\sum_{u \in \Sigma^\ell} M(u)\,\beta = \beta$.
Let $b \in \mathbb{R}$ be such that $\alpha^T M(v)\,\beta \le b$ for all $v \in \Sigma^*$; such $b$ exists since the monoid $M(\Sigma^*)$ is finite. Then we have
$$\left|\,\alpha^T M(w)\,\beta - \frac{|F_\ell|}{k^\ell}\,c\,\right| \;\le\; \frac{|N_\ell|}{k^\ell}\,b. \tag{2}$$
Let $m = |w_0|$. Define $p_\ell = |N_\ell|/k^\ell$. We can view $p_\ell$ as the probability of picking a word in $N_\ell$ when a word of length $\ell$ is picked uniformly at random. We have $p_\ell \le (1 - k^{-m})^{\lfloor \ell/m \rfloor}$, as in order to avoid $w_0$ as a factor, it has to be avoided in each of the $\lfloor \ell/m \rfloor$ consecutive blocks of length $m$. Thus, $\lim_{\ell \to \infty} p_\ell = 0$. We have $|F_\ell|/k^\ell = 1 - p_\ell$.
With Equation (2) it follows that $\left|\alpha^T M(w)\,\beta - c\right| \le p_\ell\,(b + |c|)$. Since this holds for all $\ell$, we conclude that $\alpha^T M(w)\,\beta = c$.
Now we prove Theorem 6.
Proof of Theorem 6.
By Lemma 7, we have $\alpha^T M(w)\,\beta = \mathrm{rank}(M) \cdot w_{\mathrm{col}} \cdot w_{\mathrm{row}} =: c$ for all $w$ of minimum rank. By Proposition 8, the same value is attained for all $w \in \Sigma^*$. Taking $w = \varepsilon$ shows that $c = \alpha^T \beta = 1$.
4.4 Maximal pseudo-columns
In this subsection, we define maximal pseudo-columns, which are vectors that can be seen as a relaxation of the notion of maximal columns. We show that the weight of a maximal pseudo-column is equal to the weight of a maximal column, and that a maximal pseudo-column is a solution of a system of linear equations, and thus can be computed efficiently. By invoking Theorem 6, this will allow us to efficiently compute $\mathrm{rank}(M)$.
Denote by $C_{\max}$ the set of maximal columns. By Theorem 1 (bearing in mind also Corollary 2), the vector space spanned by all maximal columns, $\langle C_{\max} \rangle$, is at least $\mathrm{rank}(M)$-dimensional:
Proposition 9.
We have $\dim\,\langle C_{\max} \rangle \ge \mathrm{rank}(M)$.
One might hypothesise that $\dim\,\langle C_{\max} \rangle = \mathrm{rank}(M)$, or even that all minimum-rank matrices have the same nonzero (hence, maximal) columns. The following example shows that neither is the case in general, not even for complete DFAs.
Example 10.
Consider the complete DFA with and
By symmetry, we have . Since no word maps states and to the same state, we have ; i.e., and are both minimum rank. Further, consists exactly of the four nonzero columns in and . Their span is -dimensional, as is orthogonal to each maximum column. Thus, .
Define the vector space $V := \langle\{\alpha^T M(w) - \alpha^T : w \in \Sigma^*\}\rangle$ of row vectors. Intuitively, it is the set of all differences of weight distributions over the states before and after a word is applied. Notice that for all $u, v \in \Sigma^*$ we have $\alpha^T M(u) - \alpha^T M(v) \in V$. Later (see the proof of Lemma 19 below) we show that $V$ is closed under post-multiplication with $M(a)$ for all $a \in \Sigma$. Such "forward spaces" play an important role in weighted automata; see, e.g., [32]. Denote the orthogonal complement of $V$ by $V^\perp$; i.e., $V^\perp = \{x \in \mathbb{R}^Q : v\,x = 0 \text{ for all } v \in V\}$. Intuitively, it is the set of vectors whose weight does not change under pre-multiplication with $M(w)$ for any $w \in \Sigma^*$ (where by the weight of a vector $x$ we understand $\alpha^T x$). Clearly, $\beta \in V^\perp$. The following proposition follows immediately from Lemma 4.
Proposition 11.
We have $C_{\max} \subseteq V^\perp$.
It follows that $\langle C_{\max} \rangle$ is a subspace of $V^\perp$. With Proposition 9, we have $\dim V^\perp \ge \mathrm{rank}(M)$. One might hypothesise that $\dim V^\perp = \mathrm{rank}(M)$. The following example shows that this is not the case in general, not even for complete DFAs.
Example 12.
Consider the DFA with and
We have and . Thus, . Hence, .
On the other hand, by symmetry we have . For any ,
It follows that . Thus, , and is a strict subspace of . For example, the vector is in but not in .
Although the dimension of $V^\perp$ does not generally equal $\mathrm{rank}(M)$, the vector space $V$ turns out useful for computing $w_{\mathrm{col}}$. Recall that, by Theorem 6 (combined with Lemma 7), we can obtain $\mathrm{rank}(M)$ by computing $w_{\mathrm{col}}$ (and, symmetrically, $w_{\mathrm{row}}$). For $q \in Q$ define
$$S(q) \;=\; \{p \in Q : \Delta(p, w) \cap \Delta(q, w) \neq \emptyset \text{ for some } w \in \Sigma^*\}.$$
Intuitively, $S(q)$ consists of the states that can "appear" in a column together with $q$ or, equivalently, the states that are "mergeable" with $q$ (that is, can be mapped to the same state as $q$ by a word). Note that $q \in S(q)$. We will need the following lemma, which is easy to prove.
Lemma 13.
Let $w \in \Sigma^*$ and $r \in Q$ be such that $c := M(w)\,e_r$ is a maximal column. Then, for every $q$ with $c_q = 1$, we have that $c_p = 0$ holds for all $p \in Q \setminus S(q)$.
We call a vector $x \in V^\perp$ a maximal pseudo-column if there is $q \in Q$ with $x_q = 1$ and $x_p = 0$ for all $p \in Q \setminus S(q)$. This notion, which is closely related to the "pseudo-cuts" from [35], can be seen as a relaxation of the notion of a maximal column: clearly, every maximal column is a maximal pseudo-column, but the converse is not true, since a maximal pseudo-column is not necessarily a vector over $\{0,1\}$, let alone a column in the strict sense, i.e., of the form $M(w)\,e_q$. The following lemma however shows that the weight of a maximal pseudo-column is equal to the weight of a maximal column. We will later show that computing the former can be done in NC2.
Lemma 14.
Let $x$ be a maximal pseudo-column. Then $\alpha^T x = w_{\mathrm{col}}$.
Proof.
Let $q$ be such that $x_q = 1$ and $x_p = 0$ for all $p \notin S(q)$. Let $w$ be such that $M(w)\,e_q$ is a maximal column. Since $x \in V^\perp$, we have $\alpha^T x = \alpha^T M(w)\,x$, and a computation based on Theorem 1 and Lemma 13 shows that $\alpha^T M(w)\,x = w_{\mathrm{col}}$.
Example 15.
We continue Example 10. We have and . Let . Then . Since and , vector is a maximal pseudo-column. Thus, by Lemma 14, .
Theorem 16.
Let $\{v_1, \ldots, v_m\}$ be a basis of $V$, and let $q \in Q$. Then the following linear system for $x \in \mathbb{R}^Q$ has a solution, and all its solutions are maximal pseudo-columns:
$$v_i\,x = 0 \ \ (1 \le i \le m), \qquad x_q = 1, \qquad x_p = 0 \ \ (p \in Q \setminus S(q)).$$
Proof.
By Proposition 11 and Lemma 13, any maximal column $c$ with $c_q = 1$ solves the linear system, and such a column exists. Let $x$ be a solution of the linear system. The equations on the first line guarantee that $x \in V^\perp$. Then, the equations on the second and third line guarantee that $x$ is a maximal pseudo-column.
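The system of Theorem 16 is small and can be solved with any linear-algebra package. A NumPy sketch (the function name and the least-squares route are our own; Theorem 16 guarantees that the system is consistent):

```python
import numpy as np

def maximal_pseudo_column(V_basis, S_q, q, n):
    """Solve: v x = 0 for each basis row v of V, x_q = 1, x_p = 0 outside S(q).
    V_basis: nonempty list of length-n row vectors spanning V; S_q: the set S(q)."""
    rows = [np.asarray(v, dtype=float) for v in V_basis]
    rhs = [0.0] * len(rows)
    e_q = np.zeros(n); e_q[q] = 1.0
    rows.append(e_q); rhs.append(1.0)             # x_q = 1
    for p in range(n):
        if p not in S_q:
            e_p = np.zeros(n); e_p[p] = 1.0
            rows.append(e_p); rhs.append(0.0)     # x_p = 0 for p outside S(q)
    A, b = np.vstack(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    assert np.allclose(A @ x, b), "consistent by Theorem 16"
    return x  # by Lemma 14, alpha @ x equals the maximum column weight w_col
```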
4.5 Dealing with the non-strongly connected case
The following lemma shows that in order to compute the minimum rank we can focus on the strongly connected case.
Proposition 17.
Let $M : \Sigma^* \to \{0,1\}^{Q \times Q}$ be an unambiguous matrix monoid morphism. Suppose that $Q = Q_1 \uplus Q_2$ is a partition of $Q$ such that for all $a \in \Sigma$ it holds that $M(a)_{p,q} = 0$ for all $p \in Q_2$ and $q \in Q_1$; i.e., for all $w \in \Sigma^*$ the matrix $M(w)$ has the block form
$$M(w) \;=\; \begin{pmatrix} M_1(w) & M'(w) \\ 0 & M_2(w) \end{pmatrix},$$
where $M_1(w) \in \{0,1\}^{Q_1 \times Q_1}$ and $M_2(w) \in \{0,1\}^{Q_2 \times Q_2}$. Then $M_1$ and $M_2$ are unambiguous matrix monoid morphisms, and we have $\mathrm{rank}(M) = \mathrm{rank}(M_1) + \mathrm{rank}(M_2)$.
By a straightforward induction, it follows from Proposition 17 that the minimum rank of an unambiguous matrix monoid is the sum of the minimum ranks of its strongly connected components (where "incomplete" components count as having rank $0$).
5 Computing the rank in NC2
In this section, we prove our first main result, which is as follows.
Theorem 18.
The problem of computing the (real) rank of an unambiguous matrix monoid is in NC$^2$.
In order to use Theorem 16, we need the following lemma. We use the notation defined in the previous section. Recall that we defined $V = \langle\{\alpha^T M(w) - \alpha^T : w \in \Sigma^*\}\rangle$.
Lemma 19.
If $M$ is strongly connected, one can compute a basis of $V$ in NC$^2$.
For each $a \in \Sigma$ define $v_a := \alpha^T M(a) - \alpha^T$, and extend this notation to $\Sigma^+$ by defining $v_{wa} := v_w\,M(a)$ for $w \in \Sigma^+$ and $a \in \Sigma$. Define $W := \langle\{v_w : w \in \Sigma^+\}\rangle$. Note that here $w$ ranges over $\Sigma^+$, i.e., nonempty words, only. By definition, $W$ is closed under right multiplication by $M(a)$ for all $a \in \Sigma$. We first show the following lemma.
Lemma 20.
We have $V = W$.
Proof.
For the inclusion $V \subseteq W$, we prove by induction on $\ell$ that for all length-$\ell$ words $w$ we have $\alpha^T M(w) - \alpha^T \in W$. Concerning the induction base, $\ell = 0$, we have $\alpha^T M(\varepsilon) - \alpha^T = 0 \in W$. Concerning the induction step, let $w \in \Sigma^\ell$ and $a \in \Sigma$. We have
$$\alpha^T M(wa) - \alpha^T \;=\; \left(\alpha^T M(w) - \alpha^T\right) M(a) + \left(\alpha^T M(a) - \alpha^T\right).$$
It holds that $\alpha^T M(a) - \alpha^T = v_a \in W$, and, by the induction hypothesis together with the closure of $W$ under right multiplication by $M(a)$, also $\left(\alpha^T M(w) - \alpha^T\right) M(a) \in W$. It follows that $\alpha^T M(wa) - \alpha^T \in W$.
For the converse, $W \subseteq V$, we proceed similarly by induction on $\ell \ge 1$, showing that $v_w \in V$ for all $w \in \Sigma^\ell$. Concerning the induction base, $\ell = 1$, for all $a \in \Sigma$ we have $v_a = \alpha^T M(a) - \alpha^T \in V$. Concerning the induction step, let $w \in \Sigma^\ell$ and $a \in \Sigma$. By the induction hypothesis there are $m \in \mathbb{N}$, $\lambda_1, \ldots, \lambda_m \in \mathbb{R}$ and $u_1, \ldots, u_m \in \Sigma^*$ such that $v_w = \sum_{i=1}^m \lambda_i\,(\alpha^T M(u_i) - \alpha^T)$. Thus, we have
$$v_{wa} \;=\; v_w\,M(a) \;=\; \sum_{i=1}^m \lambda_i \left( \left(\alpha^T M(u_i a) - \alpha^T\right) - \left(\alpha^T M(a) - \alpha^T\right) \right) \in V,$$
as required.
Proof of Lemma 19.
For each $a \in \Sigma$ define $W_a := \langle\{v_a\,M(u) : u \in \Sigma^*\}\rangle$. Using the technique from [33, Section 4.2] (see [32, Proposition 5.2] for a clearer explanation), for each $a$ one can compute a basis of $W_a$ in NC$^2$. (In [33, 32] only membership in NC is claimed, but the bottleneck computations are matrix powering and rank computation, which can in fact be done in NC$^2$; see [18]. The computations in [32, Proposition 5.2] are on polynomially larger matrices, but this does not impact the membership in NC$^2$, as $\log^2\left(n^{O(1)}\right) = O(\log^2 n)$.) The union of these bases, say $\{b_1, \ldots, b_s\}$ for some $s$, spans $W$, which equals $V$ by Lemma 20. To shrink $\{b_1, \ldots, b_s\}$ to a basis of $V$, for each $i$ include $b_i$ in the basis if and only if $b_i \notin \langle b_1, \ldots, b_{i-1} \rangle$. The latter (rank) computation can be done in NC$^2$ [27].
Now we can prove Theorem 18.
Proof of Theorem 18.
Let $M$ be an unambiguous monoid morphism. Its strongly connected components can be computed in NC$^2$. It follows from the proof of [34, Proposition 3] that one can check each component for completeness in NC$^2$, since a zero-one monoid contains the zero matrix if and only if the joint spectral radius of the set of its generators is strictly less than one [34]. Therefore, using Proposition 17, we can assume in the rest of the proof that $M$ is complete and strongly connected.
We use the fact that one can compute a solution of a possibly singular linear system of equations in NC$^2$ [11, Section 5]. First, compute in NC$^2$ vectors $\alpha, \beta$ with $\alpha^T P = k\,\alpha^T$ and $P\beta = k\,\beta$ and $\alpha^T \beta = 1$. Using Lemma 19, compute in NC$^2$ a basis of $V$. Choose an arbitrary $q \in Q$ and compute $S(q)$ in NC$^2$ with a reachability analysis. Then, solve the linear system from Theorem 16 to compute in NC$^2$ a maximal pseudo-column $x$. Hence, using Lemma 14, we can compute $w_{\mathrm{col}} = \alpha^T x$ in NC$^2$. Symmetrically, we can compute $w_{\mathrm{row}}$ in NC$^2$. Finally, by Theorem 6 and Lemma 7, we can compute $\mathrm{rank}(M) = 1/(w_{\mathrm{col}} \cdot w_{\mathrm{row}})$ in NC$^2$.
Example 21.
We continue Examples 10 and 15. Since the automaton is a complete DFA, it is natural to take $\beta = e_Q$. Since every row is of the form $e_q^T$ for some $q \in Q$, we have $w_{\mathrm{row}} = 1$. Recall that $w_{\mathrm{col}}$ was computed in Example 15. With Theorem 6 we conclude that $\mathrm{rank}(M) = 1/w_{\mathrm{col}}$, as observed in Example 10.
6 Time and space complexity
In this section, we study the time and space complexity of computing the rank of a zero-one matrix monoid and finding a matrix of minimum rank in it. We provide two approaches. The first one relies on the linear-algebraic tools developed in Section 4. It turns out to be faster than the second, combinatorial, approach, but is limited to only computing the rank. This is due to the fact that we never explicitly construct a maximal column in this approach, which is required in order to find a matrix of minimum rank by Theorem 1. Moreover, it is not known if one can find a maximal column in NC. In contrast, the combinatorial approach explicitly constructs a matrix of minimum rank step by step. In a way, this is exactly the reason why it is slower than the linear-algebraic approach, since this requires performing a large number of matrix multiplications. In the complete DFA case, where direct matrix multiplication can be avoided due to the special shape of transformation matrices, it becomes much more efficient, and outmatches its linear-algebraic counterpart by a factor of the alphabet size.
The three main results of this section are as follows.
Theorem 22.
The (real) rank of an $n$-state UFA over an alphabet of size $k$ can be computed in polynomial time and space.
Theorem 23.
A matrix of minimum (real) rank in an $n$-state UFA over an alphabet of size $k$ can be found in polynomial time and space.
Theorem 24.
A matrix of minimum (real) rank in an $n$-state complete DFA over an alphabet of size $k$ can be found in polynomial time and space.
Until the end of the section, fix a strongly connected complete UFA $\mathcal{A}$ with state set $Q$ and alphabet $\Sigma$. Denote $n = |Q|$ and $k = |\Sigma|$. Section 4.5 shows that strong connectivity can be assumed without loss of generality.
6.1 Square automaton and square digraph
We will need the construction of the square automaton of an NFA. The square automaton of $\mathcal{A} = (Q, \Sigma, \Delta)$ is defined as follows. Its state set is $Q \times Q$, and for $(p, q), (p', q') \in Q \times Q$ and $a \in \Sigma$, the transitions are defined component-wise, that is,
$$\left((p, q),\,a,\,(p', q')\right) \text{ is a transition of the square automaton} \iff (p, a, p') \in \Delta \text{ and } (q, a, q') \in \Delta.$$
Note that the square automaton of a complete DFA is also a complete DFA.
We call states of the form $(q, q)$ in the square automaton singletons. Observe that the restriction of the square automaton to singletons is a copy of $\mathcal{A}$. We denote by $G^2$ the underlying digraph of the square automaton obtained by forgetting the labels of the transitions. Note that $G^2$ has $n^2$ vertices, and there exists an infinite series of complete UFAs over a two-letter alphabet for which the number of edges of $G^2$ is quadratic in the number of its vertices [36, Appendix A]. If $\mathcal{A}$ is a complete DFA, then $G^2$ has exactly $kn^2$ edges (counted with labels).
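A sketch of the construction (our own code, with `delta` mapping each letter to its set of transition pairs):

```python
def square_digraph(n, delta):
    """Underlying digraph of the square automaton: vertices are pairs of
    states, and (p, q) -> (p1, q1) iff some letter a has transitions
    p -a-> p1 and q -a-> q1. delta: dict letter -> set of (state, state)."""
    edges = {}
    for pairs in delta.values():
        for (p, p1) in pairs:
            for (q, q1) in pairs:
                edges.setdefault((p, q), set()).add((p1, q1))
    return edges
```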
6.2 Computing the minimum rank via linear algebra
We now perform the steps of the linear-algebraic algorithm described in Section 5, but implement them efficiently in terms of time and space complexity.
Lemma 25.
For a state $q$, the set $S(q)$ can be computed in time linear in the size of $G^2$ and in $O(n^2)$ space.
Proof.
Perform a multi-source backwards digraph search in $G^2$ starting from all singletons, and label all states $p$ such that $(p, q)$ or $(q, p)$ is visited during this search. This search can be performed in time linear in the number of edges of $G^2$. Observe that only the vertices of this digraph have to be stored in the memory explicitly, since the edges can be computed on the fly from the input without increasing the time complexity; hence the space complexity of the algorithm is $O(n^2)$.
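The search of Lemma 25 can be sketched as follows (for clarity this version materialises the square digraph, thus using more space than the on-the-fly version in the proof above):

```python
from collections import deque

def mergeable_with(n, delta, q):
    """S(q): states mergeable with q, via a multi-source backward search
    from the singletons of the square digraph."""
    back = {}
    for pairs in delta.values():            # backward edges of G^2
        for (p, p1) in pairs:
            for (r, r1) in pairs:
                back.setdefault((p1, r1), set()).add((p, r))
    seen = {(s, s) for s in range(n)}       # the singletons
    work = deque(seen)
    while work:
        u = work.popleft()
        for v in back.get(u, ()):
            if v not in seen:
                seen.add(v)
                work.append(v)
    return {p for p in range(n) if (p, q) in seen}
```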
Lemma 26.
A maximal pseudo-column can be found in polynomial time and $O(n^2)$ space.
Proof.
To use Theorem 16 we first need to set up the linear system of equations described there. The matrix $P = \sum_{a \in \Sigma} M(a)$ can be computed in time $O(kn^2)$. The weight vectors $\alpha$ and $\beta$ can then be computed by solving systems of linear equations. Then, using Lemma 25, we compute the set $S(q)$ for some state $q$. We also need to compute a basis of the vector space $V$ defined in Section 4.4. As in the proof of Lemma 19, we compute a basis of $W$, which is the smallest vector space that contains $v_a$ for all $a \in \Sigma$ and is closed under post-multiplication with $M(a)$ for all $a \in \Sigma$. This can be done using a worklist algorithm and keeping a basis in echelon form using Gaussian elimination, as described, e.g., in [32, Section 2]. Finally, we solve the system of linear equations from Theorem 16. Each step requires at most $O(n^2)$ space.
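The worklist step can be sketched as follows (a simplified version of the procedure in [32, Section 2]; here `seed_rows` would be the vectors $v_a$):

```python
import numpy as np

def forward_space_basis(seed_rows, mats):
    """Smallest vector space containing the given row vectors and closed
    under right multiplication with each matrix in mats. The basis is kept
    in echelon form (distinct pivots) via Gaussian elimination."""
    basis = []   # echelonised row vectors, sorted by pivot position

    def reduce(v):
        for b in basis:                       # zero out every basis pivot in v
            piv = np.flatnonzero(b)[0]
            v = v - (v[piv] / b[piv]) * b
        return v

    work = [np.asarray(v, dtype=float) for v in seed_rows]
    while work:
        v = reduce(work.pop())
        if not np.allclose(v, 0):             # v is independent of the basis
            basis.append(v)
            basis.sort(key=lambda b: np.flatnonzero(b)[0])
            for M in mats:                    # enqueue successors of v
                work.append(v @ np.asarray(M, dtype=float))
    return basis
```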
By Lemma 14, we thus get that $w_{\mathrm{col}}$ and $w_{\mathrm{row}}$ can be computed within these time and space bounds, which together with Theorem 6 proves Theorem 22. We remark that for complete DFAs the proof of Lemma 26 yields a faster algorithm for computing the rank. The combinatorial algorithm provided below will improve on this further (see Section 6.5), while additionally finding a matrix of minimum rank.
6.3 Efficiently constructing a maximal column
We now consider a more general problem of finding a matrix of minimum rank in a zero-one matrix monoid. As mentioned above, by Theorem 1 we have to explicitly construct a maximal column for that, which turns out to be a more difficult task in terms of the time complexity. We will see in the next subsection that we only need one maximal column (together with a word constructing it) to get a matrix of minimum rank. The goal of this subsection is thus to show how to efficiently compute a short representation of a word constructing a maximal column, since the word itself may be longer than the time complexity we are aiming for. We do so by reusing repeating subwords in such a word, and describing their occurrences with a straight line program.
In [20, Section 5], an algorithm for constructing a word of rank one in complete DFAs in $O(n^3 + kn^2)$ time and $O(n^2)$ space was suggested. Our approach for finding a maximal column and a word constructing it follows a similar direction, with a few key differences. Firstly, we observe that it is enough to only compute words merging states with one chosen state $q$, instead of finding all pairs of mergeable states. This both simplifies the algorithm (in [20] an additional step is required to decrease the space complexity, which we get for free) and allows us to present our results in terms of straight line programs, giving a better insight into the regularities present in the constructed word of minimum rank.
We define set straight line programs (set-SLPs), which are just SLPs with multiple initial symbols, and thus encode a set of words instead of a single word. Formally, a set-SLP is a tuple $(N, T, \rho, I)$, where $N$ and $T$ are disjoint finite sets of nonterminal and terminal symbols respectively, $\rho$ is a function defining a derivation rule for each nonterminal symbol, and $I \subseteq N$ is a set of initial symbols. For $X \in N$, we write $\rho(X)$ as $X \to \gamma$ with $\gamma \in (N \cup T)^*$, and we call $X$ and $\gamma$ the left- and right-hand sides of this derivation rule, respectively. The length of a set-SLP is the total length of the right-hand sides of all the derivation rules. The semantics of a set-SLP is defined as follows. Given an initial symbol $X$, we recursively replace each nonterminal symbol in the current word with the right-hand side of its derivation rule until we obtain a word over $T$, which is called the word encoded by $X$. We require that each initial symbol produces a unique word over $T$ as a result of such a derivation. Namely, we require that there exists a total linear order $\prec$ on the set $N$ such that for every $X \in N$, the right-hand side of $X$ does not contain any $Y \in N$ with $X \preceq Y$. The (multi-)set of words encoded by all initial symbols is called the set of words encoded by a set-SLP.
Example 27.
Consider a set-SLP whose initial symbols derive several words sharing common subwords; a concrete instance is sketched in the code below. Such a set-SLP illustrates the reason why we are using set-SLPs: they allow to construct sets of words out of smaller "pieces" without having to explicitly repeat these "pieces" multiple times. Note that the set-SLPs that we construct below only encode sets of words whose total length is polynomial in the size of the set-SLPs.
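A hypothetical instance in the spirit of Example 27 (the concrete rules of the original example are not reproduced here), together with the two operations we need: expanding a symbol into its word, and evaluating the matrix of the encoded word bottom-up without expanding it (cf. Section 6.4):

```python
import numpy as np
from functools import lru_cache

# Hypothetical set-SLP: nonterminals are uppercase, terminals are letters.
rules = {
    "C": ["a", "b"],        # C encodes ab
    "B": ["C", "C", "a"],   # B encodes ababa -- the piece C is reused
    "A": ["B", "b"],        # A encodes ababab
}
initial = ["A", "B"]        # encoded set of words: {ababab, ababa}

def make_expand(rules):
    @lru_cache(maxsize=None)
    def expand(symbol):
        if symbol not in rules:                 # terminal symbol
            return symbol
        return "".join(expand(s) for s in rules[symbol])
    return expand

def make_matrix_of(rules, letter_matrix):
    """Matrix of the word encoded by a symbol, computed bottom-up:
    one matrix product per symbol of a right-hand side."""
    cache = {}
    def matrix_of(symbol):
        if symbol not in rules:
            return letter_matrix[symbol]
        if symbol not in cache:
            mats = [matrix_of(s) for s in rules[symbol]]
            result = mats[0]
            for Ms in mats[1:]:
                result = result @ Ms
            cache[symbol] = result
        return cache[symbol]
    return matrix_of

expand = make_expand(rules)
print([expand(s) for s in initial])             # ['ababab', 'ababa']
```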
Lemma 28.
Given a state $q$, a set-SLP defining a set of words $\{w_p : p \in S(q)\}$, where each $w_p$ is a word of length at most $n^2$ with $\Delta(p, w_p) \cap \Delta(q, w_p) \neq \emptyset$, can be computed in time linear in the size of $G^2$ and in $O(n^2)$ space.
Proof sketch.
Call the vertices of $G^2$ from which a singleton is reachable merging. The idea is to construct, by a digraph search of $G^2$, a directed tree rooted in the singletons and containing a path from each merging vertex to the root, and then use the joint subpaths of these paths in the tree to obtain a short set-SLP describing these paths. See Figure 5 for an example.
Lemma 29.
For a given state $q$ of $\mathcal{A}$, an SLP of polynomial length encoding a word $w$ such that $M(w)\,e_q$ is a maximal column can be computed in polynomial time and space.
Proof sketch.
Compute the matrices of the words encoded by the set-SLP from Lemma 28. We rely on a property, already used in some form in [14], that allows certifying maximality of a column: roughly, if multiplying by the matrices of the words encoded by the set-SLP no longer changes the column, then it is a maximal column. To construct a word with this property, we iteratively concatenate the words $w_p$, choosing $p$ depending on the nonzero columns of the matrix in the current iteration. The number of iterations is bounded by $n$.
6.4 Finding a matrix of minimum rank
We now use the results of the previous section to construct a matrix of minimum rank. The key idea is as follows: since the set of maximal columns is stable under left multiplication by matrices from the monoid, we can iteratively make each column of the matrix maximal or zero by, intuitively, applying the same word (together with a short "reachability" word) to a state in each column. This simple observation significantly decreases the time complexity of our algorithm compared to the naive implementation of the algorithm from [14] constructing a word of minimum rank. Indeed, in the approach of [14], the word is constructed letter by letter, and one needs to know the result of applying the last letter at every step. Since the constructed word has length polynomial in the number of states and the rank, this results in a high time complexity. By efficiently constructing only one maximal column (as described in the previous subsection) and reusing it for the whole set of states (as described in this subsection), we decrease the time complexity.
Proposition 30.
An SLP of polynomial length encoding a word of minimum rank can be computed in polynomial time and space.
Proof sketch.
Compute the matrix of the word encoded by the SLP from Lemma 29. Iteratively, for each $q \in Q$, concatenate the current word with a word of length at most $n$ mapping $q$ to a state corresponding to a nonzero element of the corresponding row of the matrix in the current iteration. Denote by $u$ the resulting word; it has the property that all nonzero columns of $M(u)$ are maximal. Symmetrically, compute a word $v$ such that all nonzero rows of $M(v)$ are maximal. Then, by Lemma 4 (and its row counterpart) together with Lemma 5, the word $vu$ has minimum rank.
Given an SLP of length $\ell$ encoding a word $w$, we can compute the matrix $M(w)$ by computing the matrices of the words occurring in the derivation of $w$ from bottom to top, using $O(\ell)$ matrix multiplications, i.e., in time $O(\ell\,n^\omega)$. Thus we prove Theorem 23. We also get the following result, which can be seen as a proof of a very weak version of the Černý conjecture generalised from rank one words in complete DFAs to minimum rank words in complete UFAs.
Theorem 31.
For every $n$-state complete UFA, there exists an SLP of polynomial length encoding a word of minimum rank.
6.5 Complete DFAs
For complete DFAs, we follow the same algorithms as in the proof of Theorem 23, but exploit the fact that elementary matrix operations can be performed more efficiently. Namely, if $\mathcal{A}$ is a complete DFA, then each word $w$ defines a transformation of $Q$. By storing the matrices of words as transformations, we get that matrix multiplication can be performed in $O(n)$ time, and each matrix requires only $O(n)$ space. Moreover, the square digraph $G^2$ now has only $kn^2$ edges. By taking these improvements into account, we get the proof of Theorem 24.
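A sketch of the two elementary operations for the complete DFA case (our own illustration): words are stored as transformations, composed in $O(n)$, and the rank of a word is the number of distinct images.

```python
def compose(t_u, t_v):
    """Transformation of the word uv from those of u and v: first u, then v.
    This is the O(n) counterpart of the matrix product M(u) M(v)."""
    return [t_v[p] for p in t_u]

def word_rank(t):
    """Rank of the word = number of distinct images
    = number of nonzero columns of its matrix."""
    return len(set(t))

# Example: two letters of a 3-state complete DFA, as transformations.
t_a = [1, 1, 2]                       # a merges states 0 and 1
t_b = [0, 2, 1]
print(word_rank(compose(t_a, t_b)))   # rank of the word ab -> 2
```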
7 Conclusions and open problems
We list a few open questions that follow directly from our work.
-
In [20], it was asked if a word of rank one for a complete DFA can be found in NC. Similarly, can a matrix of minimum rank for a complete DFA be computed in NC?
-
Given an unambiguous morphism $M$ and a row vector $x$, can a basis of the forward space $\langle\{x\,M(w) : w \in \Sigma^*\}\rangle$ be computed faster than in cubic time? This would improve algorithms for several fundamental problems for weighted automata [32]. Similarly, for complete DFAs, computing a basis of $V$ from Section 4.4 in subcubic time (see the proof of Lemma 26) would allow to compute the minimum rank of a complete DFA faster than in cubic time.
-
The bottleneck in the time complexity in Theorem 22 is the very first step: computing $S(q)$ via digraph search in the square digraph of $\mathcal{A}$. The number of edges of this digraph can be quadratic in the number of its vertices [36, Appendix A], hence of order $n^4$. Can $S(q)$ be computed faster than in time $O(n^4)$? Very little seems to be known about general properties of square automata of DFAs or UFAs.
-
One of the bottlenecks of the combinatorial algorithm is performing matrix multiplication. Can it be done faster for matrices from a zero-one matrix monoid? If not, are there any subclasses of UFAs (other than DFAs) where it can be done more efficiently?
References
- [1] Jorge Almeida and Benjamin Steinberg. Matrix mortality and the Černý-Pin conjecture. In Volker Diekert and Dirk Nowotka, editors, Developments in Language Theory, pages 67–80, Berlin, Heidelberg, 2009. Springer Berlin Heidelberg. doi:10.1007/978-3-642-02737-6_5.
- [2] Sanjeev Arora and Boaz Barak. Computational complexity: a modern approach. Cambridge University Press, 2009.
- [3] Christel Baier, Stefan Kiefer, Joachim Klein, David Müller, and James Worrell. Markov chains and unambiguous automata. Journal of Computer and System Sciences, 136:113–134, 2023. doi:10.1016/J.JCSS.2023.03.005.
- [4] M.-P. Béal and D. Perrin. Synchronised automata. In Valérie Berthé and Michel Rigo, editors, Combinatorics, Words and Symbolic Dynamics, Encyclopedia of Mathematics and its Applications, pages 213–240. Cambridge University Press, 2016. doi:10.1017/CBO9781139924733.008.
- [5] Marie-Pierre Béal, Eugen Czeizler, Jarkko Kari, and Dominique Perrin. Unambiguous automata. Math. Comput. Sci., 1(4):625–638, 2008. doi:10.1007/S11786-007-0027-1.
- [6] Paul C. Bell, Igor Potapov, and Pavel Semukhin. On the mortality problem: From multiplicative matrix equations to linear recurrence sequences and beyond. Information and Computation, 281:104736, 2021. doi:10.1016/J.IC.2021.104736.
- [7] Stuart J. Berkowitz. On computing the determinant in small parallel time using a small number of processors. Information Processing Letters, 18(3):147–150, 1984. doi:10.1016/0020-0190(84)90018-8.
- [8] Mikhail V. Berlinkov. On two algorithmic problems about synchronizing automata (short paper). In Arseny M. Shur and Mikhail V. Volkov, editors, Developments in Language Theory – 18th International Conference, DLT 2014, Ekaterinburg, Russia, August 26-29, 2014. Proceedings, volume 8633 of Lecture Notes in Computer Science, pages 61–67. Springer, 2014. doi:10.1007/978-3-319-09698-8_6.
- [9] Jean Berstel, Dominique Perrin, and Christophe Reutenauer. Codes and automata, volume 129. Cambridge University Press, 2010.
- [10] Vincent D. Blondel, Raphaël M. Jungers, and Alex Olshevsky. On primitivity of sets of matrices. Automatica, 61:80–88, 2015. doi:10.1016/J.AUTOMATICA.2015.07.026.
- [11] Allan Borodin, Joachim von zur Gathen, and John E. Hopcroft. Fast parallel matrix and GCD computations. Information and Control, 52(3):241–256, 1982. doi:10.1016/S0019-9958(82)90766-5.
- [12] Greg Budzban and Philip Feinsilver. The generalized road coloring problem and periodic digraphs. Applicable Algebra in Engineering, Communication and Computing, 22:21–35, 2011. doi:10.1007/S00200-010-0135-Z.
- [13] Georgina Bumpus, Christoph Haase, Stefan Kiefer, Paul-Ioan Stoienescu, and Jonathan Tanner. On the size of finite rational matrix semigroups. In Artur Czumaj, Anuj Dawar, and Emanuela Merelli, editors, 47th International Colloquium on Automata, Languages, and Programming, ICALP 2020, July 8-11, 2020, Saarbrücken, Germany (Virtual Conference), volume 168 of LIPIcs, pages 115:1–115:13. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPICS.ICALP.2020.115.
- [14] Arturo Carpi. On synchronizing unambiguous automata. Theoretical Computer Science, 60:285–296, 1988. doi:10.1016/0304-3975(88)90114-4.
- [15] Arturo Carpi and Flavio D’Alessandro. Strongly transitive automata and the Černý conjecture. Acta Informatica, 46(8):591–607, 2009. doi:10.1007/S00236-009-0106-7.
- [16] Julien Cassaigne, Vesa Halava, Tero Harju, and François Nicolas. Tighter undecidability bounds for matrix mortality, zero-in-the-corner problems, and more. CoRR, abs/1404.0644, 2014. arXiv:1404.0644.
- [17] Yves Césari. Sur l’application du théorème de Suschkewitsch à l’étude des codes rationnels complets. In Jacques Loeckx, editor, Automata, Languages and Programming, 2nd Colloquium, University of Saarbrücken, Germany, July 29 - August 2, 1974, Proceedings, volume 14 of Lecture Notes in Computer Science, pages 342–350. Springer, 1974. doi:10.1007/3-540-06841-4_73.
- [18] Stephen A. Cook. A taxonomy of problems with fast parallel algorithms. Inf. Control., 64(1-3):2–21, 1985. doi:10.1016/S0019-9958(85)80041-3.
- [19] Laszlo Csanky. Fast parallel matrix inversion algorithms. SIAM Journal on Computing, 5(4):618–623, 1976. doi:10.1137/0205040.
- [20] David Eppstein. Reset sequences for monotonic automata. SIAM Journal on Computing, 19(3):500–510, 1990. doi:10.1137/0219033.
- [21] Joel Friedman. On the road coloring problem. Proceedings of the American Mathematical Society, 110(4):1133–1135, 1990.
- [22] Balázs Gerencsér, Vladimir V. Gusev, and Raphaël M. Jungers. Primitive sets of nonnegative matrices and synchronizing automata. SIAM Journal on Matrix Analysis and Applications, 39(1):83–98, 2018. doi:10.1137/16M1094099.
- [23] Oded Goldreich. Computational Complexity: A Conceptual Perspective. Cambridge University Press, 2008. doi:10.1017/CBO9780511804106.
- [24] Pavel Goralčík and Václav Koubek. Rank problems for composite transformations. International Journal of Algebra and Computation, 05(03):309–316, 1995. doi:10.1142/S0218196795000185.
- [25] Vladimir V. Gusev and Elena V. Pribavkina. On synchronizing colorings and the eigenvectors of digraphs. In Piotr Faliszewski, Anca Muscholl, and Rolf Niedermeier, editors, 41st International Symposium on Mathematical Foundations of Computer Science, MFCS 2016, August 22-26, 2016 - Kraków, Poland, volume 58 of LIPIcs, pages 48:1–48:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2016. doi:10.4230/LIPICS.MFCS.2016.48.
- [26] Markus Holzer and Sebastian Jakobi. On the computational complexity of problems related to distinguishability sets. Information and Computation, 259(2):225–236, 2018. doi:10.1016/J.IC.2017.09.003.
- [27] Oscar H. Ibarra, Shlomo Moran, and Louis E. Rosier. A note on the parallel complexity of computing the rank of order matrices. Information Processing Letters, 11(4/5):162, 1980. doi:10.1016/0020-0190(80)90042-3.
- [28] Gérard Jacob. Un algorithme calculant le cardinal, fini ou infini, des demi-groupes de matrices. Theoretical Computer Science, 5(2):183–204, 1977. doi:10.1016/0304-3975(77)90006-8.
- [29] Jarkko Kari. A counter example to a conjecture concerning synchronizing words in finite automata. Bulletin of the EATCS, 73:146, 2001.
- [30] Jarkko Kari, Andrew Ryzhikov, and Anton Varonka. Words of minimum rank in deterministic finite automata. In Piotrek Hofman and Michal Skrzypczak, editors, Developments in Language Theory – 23rd International Conference, DLT 2019, Warsaw, Poland, August 5-9, 2019, Proceedings, volume 11647 of Lecture Notes in Computer Science, pages 74–87. Springer, 2019. doi:10.1007/978-3-030-24886-4_5.
- [31] Nasim Karimi. Reaching the minimum ideal in a finite semigroup. Semigroup Forum, 94(2):390–425, 2017.
- [32] Stefan Kiefer. Notes on equivalence and minimization of weighted automata. https://arxiv.org/abs/2009.01217, 2020. arXiv:2009.01217.
- [33] Stefan Kiefer, Ines Marusic, and James Worrell. Minimisation of multiplicity tree automata. Logical Methods in Computer Science, 13(1), 2017. doi:10.23638/LMCS-13(1:16)2017.
- [34] Stefan Kiefer and Corto N. Mascle. On nonnegative integer matrices and short killing words. SIAM Journal on Discrete Mathematics, 35(2):1252–1267, 2021. doi:10.1137/19M1250893.
- [35] Stefan Kiefer and Cas Widdershoven. Efficient analysis of unambiguous automata using matrix semigroup techniques. In Peter Rossmanith, Pinar Heggernes, and Joost-Pieter Katoen, editors, 44th International Symposium on Mathematical Foundations of Computer Science, MFCS 2019, August 26-30, 2019, Aachen, Germany, volume 138 of LIPIcs, pages 82:1–82:13. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPICS.MFCS.2019.82.
- [36] Stefan Kiefer and Cas Widdershoven. Efficient analysis of unambiguous automata using matrix semigroup techniques. CoRR, abs/1906.10093, 2019. arXiv:1906.10093.
- [37] A.A. Klyachko, I.K. Rystsov, and M.A. Spivak. An extremal combinatorial problem associated with the bound on the length of a synchronizing word in an automaton. Cybernetics, 23(2):165–171, 1987.
- [38] Douglas A. Lind and Brian Marcus. An introduction to symbolic dynamics and coding. Cambridge university press, 2021.
- [39] Arnaldo Mandel and Imre Simon. On finite semigroups of matrices. Theoretical Computer Science, 5(2):101–111, 1977. doi:10.1016/0304-3975(77)90001-9.
- [40] Mike Paterson. Unsolvability in matrices. Studies in Applied Mathematics, 49:105–107, 1970.
- [41] Vladimir Yu. Protasov. Analytic methods for reachability problems. Journal of Computer and System Sciences, 120:1–13, 2021. doi:10.1016/J.JCSS.2021.02.007.
- [42] Vladimir Yu. Protasov and Andrey S. Voynov. Matrix semigroups with constant spectral radius. Linear Algebra and its Applications, 513:376–408, 2017.
- [43] Igor Rystsov. Rank of a finite automaton. Cybernetics and Systems Analysis, 28(3):323–328, 1992.
- [44] Igor K. Rystsov. Reset words for commutative and solvable automata. Theoretical Computer Science, 172(1-2):273–279, 1997. doi:10.1016/S0304-3975(96)00136-3.
- [45] Wojciech Rytter. The space complexity of the unique decipherability problem. Information Processing Letters, 23(1):1–3, 1986. doi:10.1016/0020-0190(86)90121-3.
- [46] Andrew Ryzhikov. Mortality and synchronization of unambiguous finite automata. In Robert Mercas and Daniel Reidenbach, editors, Combinatorics on Words – 12th International Conference, WORDS 2019, Loughborough, UK, September 9-13, 2019, Proceedings, volume 11682 of Lecture Notes in Computer Science, pages 299–311. Springer, 2019. doi:10.1007/978-3-030-28796-2_24.
- [47] Andrew Ryzhikov. On shortest products for nonnegative matrix mortality. In Laura Kovács and Ana Sokolova, editors, Reachability Problems – 18th International Conference, RP 2024, Vienna, Austria, September 25-27, 2024, Proceedings, volume 15050 of Lecture Notes in Computer Science, pages 104–119. Springer, 2024. doi:10.1007/978-3-031-72621-7_8.
- [48] Sujin Shin and Jisang Yoo. A note on the rank of semigroups. Semigroup Forum, 81(2):335–343, 2010.
- [49] Mikhail V. Volkov. Synchronizing automata and the Černý conjecture. In Carlos Martín-Vide, Friedrich Otto, and Henning Fernau, editors, Language and Automata Theory and Applications, Second International Conference, LATA 2008, Tarragona, Spain, March 13-19, 2008. Revised Papers, volume 5196 of Lecture Notes in Computer Science, pages 11–27. Springer, 2008.
- [50] Mikhail V. Volkov. Synchronization of finite automata. Russian Mathematical Surveys, 77(5):819–891, 2022. doi:10.4213/rm10005e.
- [51] Yaokun Wu and Yinfeng Zhu. Primitivity and Hurwitz primitivity of nonnegative matrix tuples: A unified approach. SIAM Journal on Matrix Analysis and Applications, 44(1):196–211, 2023. doi:10.1137/22M1471535.