Persistent (Co)Homology in Matrix Multiplication Time
Abstract
Most algorithms for computing persistent homology do so by tracking cycles that represent homology classes. There are many choices of such cycles, and specific choices have found different uses in applications. Although it is known that persistence diagrams can be computed in matrix multiplication time for the more general case of zigzag persistent homology [21], it is not clear how to extract cycle representatives, especially if specific representatives are desired. In this paper, we provide the same matrix multiplication bound for computing representatives for the two choices common in applications in the case of ordinary persistent (co)homology. We first provide a fast version of the reduction algorithm, which is simpler than the algorithm in [21], but returns a different set of representatives than the standard algorithm [15]. We then give a fast version of a variant called the row algorithm [10], which returns the same representatives as the standard algorithm.
Keywords and phrases:
persistent homology, matrix multiplication, cycle representatives
Funding:
Dmitriy Morozov: supported by the Director, Office of Science, Office of Advanced Scientific Computing Research, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
2012 ACM Subject Classification:
Computing methodologies → Algebraic algorithms; Mathematics of computing → Algebraic topology; Computing methodologies → Linear algebra algorithms; Theory of computation → Computational geometry
Editors:
Oswin Aichholzer and Haitao Wang

1 Introduction
Persistent homology [15] is one of the core techniques in topological data analysis. Because of its theoretical importance as well as its wide use in applications, a lot of work has been dedicated to its efficient computation both in theory [21, 10, 7, 4, 20] and in practice [2, 3, 1, 18, 17, 26]. On the theoretical side, the connection between persistent homology and Gaussian elimination was established early on [27], and as a result it has been long accepted that persistence can be computed in matrix multiplication time. Milosavljevic et al. [21] gave an explicit algorithm for the more general case of zigzag persistence [5, 6]. The main difficulty in applying standard linear algebra techniques is the strict constraint on the row and column ordering of the boundary matrix.
As persistent homology has evolved as a field, the significance of its formulation as an $\mathbf{R} = \mathbf{D}\mathbf{V}$ decomposition [8] of the input boundary matrix has become clear. The cycles and chains recovered from this decomposition have found many uses in attributing topological features to the input data [11, 9, 24]. However, it is not immediately clear from the algorithm in [21] how to recover the matrices $\mathbf{R}$ and $\mathbf{V}$ needed in applications. Rather than deconstructing the algorithm of Milosavljevic et al. [21], we simplify it for the case of ordinary persistence. We describe how to compute two different forms of the decomposition that come up in applications – the lazy and exhaustive reductions. This allows us to recover specific cycle representatives in matrix multiplication time. An added benefit of the simpler setting is that the algorithms in this paper are considerably simpler than the one in [21]. Finally, using Dey and Hou’s fast zigzag construction [13], which reduces the computation of zigzag persistence to ordinary persistence, the algorithms in this paper give another way to compute zigzag persistence in matrix multiplication time.
2 Preliminaries
We are concerned with the persistent homology of a filtration, an increasing sequence of simplicial complexes indexed by the natural numbers:
$$\emptyset = K_0 \subseteq K_1 \subseteq K_2 \subseteq \cdots \subseteq K_n = K.$$
While persistent homology can be defined more generally, from an algorithmic perspective this is sufficiently general, as nearly all considered cases may be reduced to this setting. We further assume that each $K_i \setminus K_{i-1}$ consists of a single simplex $\sigma_i$, or equivalently that the filtration induces a total order on the simplices, i.e., each step in the filtration adds a single cell. If the initial filtration does not satisfy this requirement and defines only a partial order, we may always extend it to an arbitrary compatible total order, e.g., breaking ties by lexicographical ordering and dimension.
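As a running toy example (ours, not from the paper), the boundary matrix over $\mathbb{Z}/2$ of a filtered triangle, with the simplices listed in a compatible total order:

```python
# Toy filtration of a triangle over Z/2 (our illustration): vertices, then
# edges, then the face, so every simplex appears after all of its faces.
import numpy as np

simplices = ["a", "b", "c", "ab", "bc", "ac", "abc"]
D = np.zeros((7, 7), dtype=np.int8)
D[[0, 1], 3] = 1      # boundary of ab  = a + b
D[[1, 2], 4] = 1      # boundary of bc  = b + c
D[[0, 2], 5] = 1      # boundary of ac  = a + c
D[[3, 4, 5], 6] = 1   # boundary of abc = ab + bc + ac
```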
Notation.
Throughout, we use the following notation:
-
Matrices are denoted by bold capital letters, e.g., $\mathbf{M}$, with sub-matrices indexed by square brackets, e.g., the $j$-th column is $\mathbf{M}[\cdot, j]$, the $i$-th row is $\mathbf{M}[i, \cdot]$.
-
Indexing starts at 1, and rows and columns may be indexed by sets, e.g., $\mathbf{M}[\cdot, \{1, \ldots, k\}]$ refers to the first $k$ columns. Note that the set need not refer to contiguous columns/rows.
-
We consider permutations implicitly, so $\rho\mathbf{M}[I, \cdot]$ refers to the rows indexed by $I$ after the permutation $\rho$ is applied to matrix $\mathbf{M}$. For all the algorithms, performing an explicit permutation and then undoing it does not affect the asymptotic running time.
2.1 Persistence Algorithm
For completeness, we recount the standard persistence algorithm; see [15], [27], and [16] for a more complete description. Let $\mathbf{D}$ denote the boundary operator, which we represent as an $n \times n$ matrix over some fixed (finite) field $\mathbb{F}$. The rows (top-down) and columns (left-right) are ordered by appearance in the filtration. Hence, the $i$-th row and column represent the simplex $\sigma_i$, and the $i$-th column stores the boundary of $\sigma_i$. The goal is to compute the $\mathbf{R} = \mathbf{D}\mathbf{V}$ decomposition [8]. That is, find matrices $\mathbf{R}$ and $\mathbf{V}$ such that
$$\mathbf{R} = \mathbf{D}\mathbf{V},$$
where $\mathbf{R}$ is a matrix in reduced column echelon form and $\mathbf{V}$ is a full-rank upper-triangular matrix; we let $\mathbf{U} = \mathbf{V}^{-1}$. We introduce the helper function
$$\mathrm{low}(j) = \max\{\, i : \mathbf{R}[i, j] \neq 0 \,\},$$
which is defined whenever $\mathbf{R}[\cdot, j] \neq 0$. When defined, it returns the largest row index where the column is non-zero; we refer to these lowest non-zero elements as pivots. Throughout this paper, $\mathrm{low}$ always applies to $\mathbf{R}$. The persistence diagram is then given by:
$$\mathrm{Dgm} = \{\, (i, j) : i = \mathrm{low}(j) \,\} \;\cup\; \{\, (i, \infty) : \mathbf{R}[\cdot, i] = 0 \text{ and } i \neq \mathrm{low}(j) \text{ for all } j \,\}. \tag{1}$$
The standard algorithm is a variant of Gaussian elimination. Algorithm 1, which we call the lazy reduction, reduces each column by considering pivots in the columns to the left. In each step of the loop, it removes the lowest non-zero entry until a new pivot is found or the entire column is zeroed out. A straightforward analysis gives a running time bound of $O(n^3)$. In [22], a filtration where this algorithm takes cubic time was given, showing that the bound is tight. The updates of matrix $\mathbf{V}$ follow those of matrix $\mathbf{R}$. The updates in matrix $\mathbf{U}$ undo the updates in $\mathbf{V}$ to maintain $\mathbf{U}\mathbf{V} = \mathbf{I}$. Because the columns of $\mathbf{R}$ are processed from left to right, during the update of matrix $\mathbf{U}$, the relevant row has a single (diagonal) element. Therefore, the update in $\mathbf{U}$ is equivalent to just setting a single entry to the negation of the coefficient used in the column update.
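To make the loop concrete, the following is a minimal sketch of the lazy reduction over $\mathbb{Z}/2$ (not the paper’s pseudocode; the function and variable names are ours). It maintains $\mathbf{R}$ and $\mathbf{V}$ in parallel, so $\mathbf{R} = \mathbf{D}\mathbf{V}$ holds throughout.

```python
# Lazy (standard) reduction over Z/2 -- a minimal sketch, assuming D is a dense
# 0/1 numpy array with rows and columns in filtration order.
import numpy as np

def low(col):
    """Largest row index with a non-zero entry, or -1 if the column is zero."""
    nz = np.flatnonzero(col)
    return nz[-1] if len(nz) else -1

def lazy_reduction(D):
    D = np.array(D, dtype=np.int8) % 2
    n = D.shape[1]
    R, V = D.copy(), np.eye(n, dtype=np.int8)
    pivot_of = {}                      # pivot row -> column that owns it
    for j in range(n):
        p = low(R[:, j])
        while p != -1 and p in pivot_of:
            k = pivot_of[p]            # earlier column with a conflicting pivot
            R[:, j] = (R[:, j] + R[:, k]) % 2
            V[:, j] = (V[:, j] + V[:, k]) % 2
            p = low(R[:, j])
        if p != -1:
            pivot_of[p] = j
    pairs = sorted((i, j) for i, j in pivot_of.items())
    return R, V, pairs
```

On the toy triangle filtration above, this pairs b with ab, c with bc, and ac with abc, leaving the vertex a as the single essential class.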
We introduce another variant of persistence computation used in applications. This one “looks ahead” and eliminates as many elements from the matrix as it can; so we call it the exhaustive reduction. In Algorithm 2, by the time a column is considered, it is already reduced. It is then applied to all columns to the right, zeroing out the entire pivot row. In this way, when a column is processed, all previous pivots have already been applied. Because column $j$ may have already been used to reduce another column, when an update is applied to it, the corresponding row of $\mathbf{U}$ need not consist of a single diagonal element. Hence, the row update in matrix $\mathbf{U}$ is not equivalent to just setting a single entry to the negated coefficient, like in the lazy version of the algorithm.
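Under the same conventions (and reusing low from the sketch above), a sketch of the exhaustive variant: once a pivot is found, its column is applied to every later column with a non-zero entry in the pivot row.

```python
# Exhaustive reduction over Z/2 -- a sketch building on lazy_reduction above.
# When column j is reached it is already reduced; its pivot row is then zeroed
# out in every later column.
def exhaustive_reduction(D):
    D = np.array(D, dtype=np.int8) % 2
    n = D.shape[1]
    R, V = D.copy(), np.eye(n, dtype=np.int8)
    for j in range(n):
        p = low(R[:, j])
        if p == -1:
            continue                   # zero column: positive simplex
        for k in range(j + 1, n):      # clear row p to the right of column j
            if R[p, k]:
                R[:, k] = (R[:, k] + R[:, j]) % 2
                V[:, k] = (V[:, k] + V[:, j]) % 2
    return R, V
```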
Lazy vs. Exhaustive.
The algorithms based on the two reductions are in a sense the extreme cases: the lazy reduction reduces columns only when necessary, and the exhaustive reduction reduces them as much as possible. To illustrate the difference, in the exhaustive reduction, the entire row to the right of a pivot is zeroed out, whereas in the lazy reduction, an entry is zeroed out only in those columns that develop a conflicting pivot in the same row. There are, of course, any number of other possible reductions between these two – but we do not know of any applications that rely on them.
Remark 1.
As noted in [10, Section 3.4], persistent cohomology can be computed by performing the same algorithm on the anti-transpose of $\mathbf{D}$. As such, all the algorithms in this paper can also compute persistent cohomology.
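A one-line sketch of the anti-transpose (our illustration, assuming a square numpy array): flipping both axes and transposing reflects the matrix across its anti-diagonal.

```python
# Anti-transpose of a square matrix: A_perp[i, j] = A[n-1-j, n-1-i].
def anti_transpose(D):
    return D[::-1, ::-1].T
```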
2.2 Cycle Representatives
Given an $\mathbf{R} = \mathbf{D}\mathbf{V}$ decomposition, we distinguish between two types of simplices in the filtration: positive simplices create new homology classes; negative simplices destroy them. From the matrix decomposition, one can recover a set of cycle representatives of persistent homology. There are two sources of this information, although with different content.
- 1. $\mathbf{V}$ matrix: The columns of positive simplices in $\mathbf{D}$ correspond to zero columns in the reduced matrix $\mathbf{R}$. These columns in $\mathbf{V}$ are cycles, by definition.
- 2. $\mathbf{R}$ matrix: The columns of negative simplices in $\mathbf{R}$ are non-zero. They store linear combinations of boundaries, which are cycles (at the time of their birth) that eventually die in the filtration. (Both are read off a computed decomposition in the sketch after this list.)
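A sketch of reading both kinds of representatives off a computed decomposition, reusing lazy_reduction and low from the sketches in Section 2.1 (the helper name representatives is ours):

```python
# Extract cycle representatives from R = D V (sketch): zero columns of R mark
# positive simplices, whose V columns are cycles; non-zero columns of R are
# cycles that die when the corresponding negative simplex enters.
import numpy as np

def representatives(D):
    R, V, pairs = lazy_reduction(D)
    n = R.shape[1]
    positive = [j for j in range(n) if low(R[:, j]) == -1]
    birth_cycles = {j: np.flatnonzero(V[:, j]) for j in positive}          # from V
    death_cycles = {j: np.flatnonzero(R[:, j]) for j in range(n)
                    if low(R[:, j]) != -1}                                 # from R
    return birth_cycles, death_cycles
```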
The cycles one recovers from matrix $\mathbf{R}$ differ between the lazy and exhaustive reductions. The exhaustive reduction was called total reduction by Cohen-Steiner et al. [9]. They show that these cycles form a lexicographically optimal basis, which can be used to triangulate point cloud data. Nigmetov and Morozov [24] use the lazy reduction – specifically, the columns and rows of the decomposition matrices – to recover “critical sets,” subsets of simplicial complexes affected by optimization with the loss formulated in terms of a persistence diagram.
Minimum Spanning Acycle Basis.
The negative simplices in the filtration represent a minimum spanning acycle (MSA) [19, 25]. The columns in $\mathbf{V}$ obtained from either the lazy or the exhaustive reduction – or any reduction that never adds columns of positive simplices to other columns – have the form $\sigma_i + \sum_k c_k \sigma_k$, where the $\sigma_k$ are negative. That is, the support of the chain is one (positive) simplex plus a linear combination of simplices in the minimum spanning acycle.
Lemma 2.
The cycle representatives recovered from $\mathbf{V}$ are the same for the lazy and exhaustive reductions.
Proof.
Which columns of $\mathbf{R}$ are non-zero does not depend on the algorithm used; the set of negative simplices depends only on the order of the filtration. We observe that the boundaries of the negative simplices form a basis (since their pivots are in distinct rows). By definition, the boundary of a positive simplex may be expressed as a linear combination of boundaries of negative simplices. Since the boundaries of the negative simplices form a basis, this linear combination is unique.
Death Basis.
An alternative is to get cycle representatives directly from the $\mathbf{R}$ matrix. Its non-zero columns represent cycles: since $\mathbf{R} = \mathbf{D}\mathbf{V}$, we have $\mathbf{D}\mathbf{R} = \mathbf{D}^2\mathbf{V} = 0$, because $\mathbf{D}^2 = 0$. These columns give cycle representatives for all finite classes, i.e., those homology classes that eventually die.
Remark 3.
Following [10], this may be used for cohomology – applying the algorithms to $\mathbf{D}^\perp$, the anti-transpose of $\mathbf{D}$, to obtain the decomposition $\mathbf{R}^\perp = \mathbf{D}^\perp \mathbf{V}^\perp$. The cocycle representatives are then given by columns in $\mathbf{R}^\perp$. To see that these are cocycles, observe that by construction they map to zero via the coboundary operator before the corresponding death time, i.e., below the pivot in the corresponding column of $\mathbf{R}^\perp$, as we have reversed the indexing with the anti-transpose. Likewise, the cocycle representatives of essential cocycles are given by those columns of $\mathbf{V}^\perp$ whose corresponding columns in $\mathbf{R}^\perp$ are zero and which are not coboundaries, i.e., whose corresponding rows in $\mathbf{R}^\perp$ contain no pivots.
3 Matrix Preliminaries
We recount a few classical results on matrix multiplication. We assume the matrices are over some field $\mathbb{F}$.
Lemma 4.
Let $\mathbf{A}$ be an $n \times n$ matrix and $\mathbf{B}$ an $n \times m$ matrix with $n \leq m$. The product $\mathbf{A}\mathbf{B}$ can be computed in $O(mn^{\omega-1})$ time, where $\omega$ is the exponent of square matrix multiplication.
Proof.
Divide into sub-matrices of size :
Each is a product of two matrices, which by definition takes time. There are products to compute yielding a running time of .
Column Operations via Matrix Multiplication.
Here we relate matrix multiplication to the reduction steps used in the persistence algorithms, i.e., we represent column operations via matrix multiplication. Assume that we have the following block matrix,
$$\mathbf{M} = \begin{bmatrix} \mathbf{A} & \mathbf{B} \end{bmatrix} = \begin{bmatrix} a_1 & \cdots & a_k & b_1 & \cdots & b_\ell \end{bmatrix},$$
where the $a_i$ and $b_j$ are (column) vectors. Reducing a column of $\mathbf{B}$, which we denote $b_j$, using the vectors $a_1, \ldots, a_k$ is expressed as a linear combination
$$b_j \;\leftarrow\; b_j + \sum_{i=1}^{k} c_i\, a_i = b_j + \mathbf{A} c,$$
where the $c_i$ are the coefficients. Reducing only the $j$-th column of $\mathbf{B}$ may be written as
$$\mathbf{B} \;\leftarrow\; \mathbf{B} + \mathbf{A}\, c\, e_j^{T},$$
where $e_j$ is a column vector whose $j$-th entry is 1 and zero everywhere else. If we collect the coefficients for all the columns into a matrix $\mathbf{C}$, we can rewrite the reduction of $\mathbf{B}$ by
$$\mathbf{B} \;\leftarrow\; \mathbf{B} + \mathbf{A}\mathbf{C},$$
or equivalently, the reduced matrix is
$$\begin{bmatrix} \mathbf{A} & \mathbf{B} + \mathbf{A}\mathbf{C} \end{bmatrix} = \begin{bmatrix} \mathbf{A} & \mathbf{B} \end{bmatrix} \begin{bmatrix} \mathbf{I} & \mathbf{C} \\ 0 & \mathbf{I} \end{bmatrix}.$$
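In code, a batch of column reductions is a single block product (a sketch; the function name and shapes are our choices):

```python
# Column reductions as one multiplication: [A | B] * [[I, C], [0, I]] = [A | B + A C].
import numpy as np

def batch_reduce(A, B, C):
    """Return [A | B + A @ C], the block matrix after the column operations C."""
    k, l = A.shape[1], B.shape[1]
    M = np.hstack([A, B])
    T = np.block([[np.eye(k), C],
                  [np.zeros((l, k)), np.eye(l)]])
    return M @ T
```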
Lemma 5.
Multiplying a permutation matrix with an $n \times m$ matrix takes $O(nm)$ time.
Proof.
As a permutation matrix rewrites each element once, the total number of elements gives the bound.
4 Column Algorithms
Here we describe the algorithm in [21] for the special case of classical persistent homology. To aid the exposition, we first recall the Schur complement. Given a block matrix
$$\mathbf{M} = \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{E} \end{bmatrix},$$
assuming $\mathbf{A}$ is non-singular, we can compute an updated matrix where the rows corresponding to $\mathbf{A}$ are zeroed out in the remaining columns by the following transformation:
$$\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{E} \end{bmatrix} \begin{bmatrix} \mathbf{I} & -\mathbf{A}^{-1}\mathbf{B} \\ 0 & \mathbf{I} \end{bmatrix} = \begin{bmatrix} \mathbf{A} & 0 \\ \mathbf{C} & \mathbf{E} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B} \end{bmatrix}, \tag{2}$$
where $\mathbf{E} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B}$ is referred to as the Schur complement.
Observation 6.
The matrix $\begin{bmatrix} \mathbf{I} & -\mathbf{A}^{-1}\mathbf{B} \\ 0 & \mathbf{I} \end{bmatrix}$ encodes the column operations required to zero out $\mathbf{B}$, that is, it plays the role of the coefficient block $\mathbf{C}$ of the previous section.
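A sketch of the transformation in Equation 2, over the rationals for illustration (the algorithms themselves work over a finite field); the returned multiplier is exactly the column-operation matrix of Observation 6.

```python
# Schur-complement column elimination (Equation 2 sketch): zero out the
# top-right block B; the bottom-right block becomes E - C A^{-1} B.
import numpy as np

def schur_eliminate(A, B, C, E):
    k, m = A.shape[0], E.shape[1]
    X = np.linalg.solve(A, B)                       # A^{-1} B
    W = np.block([[np.eye(k), -X],
                  [np.zeros((m, k)), np.eye(m)]])   # column operations
    M = np.vstack([np.hstack([A, B]), np.hstack([C, E])])
    return M @ W, W                                 # [[A, 0], [C, E - C A^{-1} B]]
```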
A key fact we will use in the algorithm is that matrix inversion has the same algorithmic complexity as matrix multiplication.
Theorem 7.
Inverting a (square) non-singular (upper or lower) triangular matrix can be done in matrix multiplication time.
Proof.
See [23, Appendix] for a simplified proof for the special case of triangular matrices.
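A sketch of the block recursion behind Theorem 7 for the lower-triangular case: the block formula $\begin{bmatrix}\mathbf{T}_{11} & 0\\ \mathbf{T}_{21} & \mathbf{T}_{22}\end{bmatrix}^{-1} = \begin{bmatrix}\mathbf{T}_{11}^{-1} & 0\\ -\mathbf{T}_{22}^{-1}\mathbf{T}_{21}\mathbf{T}_{11}^{-1} & \mathbf{T}_{22}^{-1}\end{bmatrix}$ reduces inversion to two half-size inversions plus a constant number of multiplications.

```python
# Recursive inversion of a non-singular lower-triangular matrix (sketch, over
# the reals for illustration).
import numpy as np

def invert_lower_triangular(T):
    n = T.shape[0]
    if n == 1:
        return np.array([[1.0 / T[0, 0]]])
    k = n // 2
    T11, T21, T22 = T[:k, :k], T[k:, :k], T[k:, k:]
    I11 = invert_lower_triangular(T11)
    I22 = invert_lower_triangular(T22)
    inv = np.zeros_like(T, dtype=float)
    inv[:k, :k], inv[k:, k:] = I11, I22
    inv[k:, :k] = -I22 @ T21 @ I11
    return inv
```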
4.1 Fast Exhaustive Algorithm
We present a fast version of the exhaustive reduction in Algorithm 3. The general structure follows Algorithm 2. However, rather than applying the pivots when we find them, we apply them in batches via the Schur complement through a standard recursion. We first augment the boundary matrix with an identity matrix of size at least $n$, for reasons we shall describe below. To simplify notation, we assume $n$ is a power of 2. If it is not, we can set the size of the identity matrix to $m$, with $m$ chosen so that the total number of columns is a power of 2. Therefore, our initial matrix is the augmented matrix $\begin{bmatrix} \mathbf{D} & \mathbf{I} \end{bmatrix}$, with $\mathbf{V}$ initialized to the identity matrix of matching size.
Throughout the algorithm, we implicitly perform two types of permutations by taking appropriate subsets of rows and columns. The first are column permutations. To ensure that the appropriate submatrices are invertible, we require that after processing $k$ columns of $\mathbf{D}$, there are $k$ pivots, or, equivalently, that the matrix formed by the processed columns has full (column) rank. Because cycles reduce to zero columns, the if-statement in Algorithm 3 permutes a zero column of $\mathbf{D}$ with a non-zero column from the identity matrix, and records the indices of such columns in a list, so that the permutation can eventually be undone.
We must also consider row permutations. To apply Theorem 7, we must find the inverse of a triangular block in Algorithm 3, which is used in the Schur complement. We construct the sequence $P$ of the pivot rows of the processed columns. Denoting by $\bar{P}$ the sequence of the rows outside of $P$, we denote by $\rho$ a permutation that places the rows $P$ below the rows $\bar{P}$:
$$\rho\,\mathbf{M} = \begin{bmatrix} \mathbf{M}[\bar{P}, \cdot] \\ \mathbf{M}[P, \cdot] \end{bmatrix}. \tag{3}$$
See Figure 2(b-c). The permutation is given explicitly by mapping the $i$-th entry of $P$ to row $|\bar{P}| + i$, and the entries of $\bar{P}$, in their original order, to the rows above.
Lemma 8.
After permuting the rows by $\rho$, the bottom-left matrix – the rows $P$ restricted to the processed columns – is lower triangular, where the rows of $P$ are listed in the order of the columns that own the pivots.
We delay the proof until after the description of the algorithm. We now follow a recursion: we split the matrix into the first half of the columns and the second half. We proceed to call the algorithm on the first half of the matrix. Once we reach a leaf in the recursion tree (shown in Figure 3), the matrix consists of a single column. The key invariant is that whenever we reach a leaf, the corresponding column is reduced with respect to all the columns that come before it in the filtration. For the first column, this is tautologically true, and we delay the proof of the general case until later.
If we are not in the base case (a single column), we must apply the pivots found in the columns of the left half of the submatrix to the columns of the right half. As described in Equation 3, we construct the permutation $\rho$ that moves the pivot rows of the left half to the bottom. By construction of $\rho$, the bottom-left block is lower-triangular with non-zero diagonal entries, and so is full-rank. We can then apply the Schur complement (Equation 2) to zero out all the corresponding rows in the right-half columns, and to update the rest of the rows. To update matrix $\mathbf{V}$, we perform the same column operations on it; a single such batch step is sketched below.
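The following is a simplified sketch of one batch step (our illustration, over the rationals, ignoring the identity padding and performing no explicit permutation): the pivot rows of the already-reduced left columns are cleared from the right columns with one solve and one multiplication.

```python
# One batch step of the fast exhaustive reduction (simplified sketch): L holds
# already-reduced columns with distinct pivot rows, Rt holds the columns to
# update; the returned C records the column operations applied.
import numpy as np

def apply_left_to_right(L, Rt):
    cols = [j for j in range(L.shape[1]) if np.any(L[:, j])]
    pivots = [int(np.flatnonzero(L[:, j])[-1]) for j in cols]
    A = L[np.ix_(pivots, cols)]              # invertible: a permuted triangular block
    C = -np.linalg.solve(A, Rt[pivots, :])   # coefficients of the column operations
    return Rt + L[:, cols] @ C, C            # pivot rows of the result are zero
```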
Once the left half of the matrix has been applied to the right half, we recurse on the second half. Here again, because the bottom-left block is lower-triangular and full-rank, and its rows have been zeroed out in the right half, the first column of the second half is reduced when it is reached. We now prove correctness.
Lemma 9.
When the recursion base case is reached for column $j$, the entries $\mathbf{D}[\mathrm{low}(i), j] = 0$ for all $i < j$.
Proof.
For $j = 1$, this is tautological since the set of prior columns is empty. For $j > 1$, we observe that the entry in row $\mathrm{low}(i)$ of column $j$ was made zero at the recursion node whose left half contains column $i$ and whose right half contains column $j$. As the recursion always splits the column range in half, this condition must be satisfied somewhere in the recursion before the base case of $j$ is reached.
Proof of Lemma 8.
We observe that when $\rho$ is applied, the columns corresponding to $P$ each have a pivot. By construction, $\rho$ places the pivots onto the diagonal of the bottom rows. For any of these rows, the entries to the right of the diagonal are 0 by Lemma 9, implying that the bottom matrix is lower triangular.
Theorem 10.
Algorithm 3 computes the persistence diagram correctly.
Proof.
By Equation 1, it suffices to show that for all $j$, $\mathrm{low}(j)$ is computed correctly. For $j = 1$, this is trivially true. For $j > 1$, observe that at any step of the algorithm we maintain
$$\mathbf{R} = \mathbf{D}\mathbf{V},$$
as any operations on the working copy of $\mathbf{D}$ are also applied to $\mathbf{V}$. Assuming the pivots are correct for columns $1, \ldots, j-1$, by the above and Lemma 9, the algorithm gives a transformation, depending only on columns $1, \ldots, j-1$, such that all the entries of column $j$ in rows $\mathrm{low}(i)$, $i < j$, are zero. If $\mathrm{low}(j)$ does not exist, column $j$ was in the span of the previous columns and hence a cycle. If $\mathrm{low}(j)$ exists, it is distinct from the previous pivots, and so we have a new pairing. The correctness of this pairing follows from the Pairing Uniqueness Lemma [8].
Running Time Analysis.
The base case takes $O(n)$ time. For a general step with $k$ columns, applying the update requires a row permutation of an $n \times k$ matrix, which takes $O(nk)$ time. We must compute the inverse of a $k \times k$ lower-triangular matrix, which by Theorem 7 takes $O(k^\omega)$ time. Multiplying it with the adjacent block takes $O(k^\omega)$ time, as both are $k \times k$ matrices. Finally, we multiply this product with an $n \times k$ matrix, which by Lemma 4 takes $O(nk^{\omega-1})$ time. Solving the recursion, we find that the total running time is $O(n^\omega)$ for $\omega > 2$, or $O(n^2 \log n)$ if $\omega = 2$. For completeness, we include the derivation in the full version [23].
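For intuition, a sketch of the recursion solve under the per-node accounting above (the full derivation is in [23]): a node handling $k$ columns does $O(nk^{\omega-1})$ work, so
$$T(k) \le 2\,T(k/2) + O\!\left(nk^{\omega-1}\right), \qquad T(n) \le \sum_{\ell=0}^{\log_2 n} 2^{\ell}\cdot O\!\Big(n\big(\tfrac{n}{2^{\ell}}\big)^{\omega-1}\Big) = O\!\Big(n^{\omega}\sum_{\ell \ge 0} 2^{-\ell(\omega-2)}\Big).$$
For $\omega > 2$ the sum is a convergent geometric series, giving $O(n^\omega)$; for $\omega = 2$ each of the $O(\log n)$ levels contributes $O(n^2)$, giving $O(n^2 \log n)$.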
5 Row Algorithm
While Algorithm 3 produces the exhaustive reduction in matrix multiplication time, a natural question is whether the same can be done for computing the lazy reduction (and its representatives). In this section, we present an algorithm that achieves this. We first give the iterative version before describing how it can be computed in matrix multiplication time.
The idea behind this reduction is to consider rows from the bottom up. Observe that a pivot in the bottom row can be directly identified in $\mathbf{D}$ as the earliest (leftmost) non-zero entry, just as the pivot in the first column can be directly identified. To formalize the notion of the earliest eligible pivot in a given row, we define
$$\mathrm{left}(i) = \min\{\, j : \mathbf{R}[i, j] \neq 0 \text{ and } \mathrm{low}(j) = i \,\}.$$
As in the case of $\mathrm{low}$, we only apply this function to $\mathbf{R}$. The function returns the earliest column that is non-zero in row $i$ and whose lowest non-zero entry lies in that row. This condition is important, as the earliest (left-most) non-zero entry in the row may have pivots below it. This column is used to zero out the $i$-th row in columns to the right, provided they do not already contain a pivot, i.e., the pivot column is applied to a later column $k$ only if $\mathrm{low}(k) = i$. This condition is why we obtain equivalence with the lazy reduction – we do not apply the pivot column to later columns which already contain a pivot.
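A minimal sketch of the iterative row algorithm under the $\mathbb{Z}/2$ conventions of the earlier sketches (reusing low; the function name is ours): rows are processed bottom-up, and in each row the earliest column whose lowest entry sits in that row clears the row in every later column whose lowest entry also sits there.

```python
# Iterative row algorithm over Z/2 (sketch): reduce row by row, bottom-up.
def row_reduction(D):
    D = np.array(D, dtype=np.int8) % 2
    n = D.shape[0]
    R, V = D.copy(), np.eye(D.shape[1], dtype=np.int8)
    for i in range(n - 1, -1, -1):                     # rows, bottom-up
        owners = [j for j in range(D.shape[1]) if low(R[:, j]) == i]
        if not owners:
            continue
        p = owners[0]                                  # earliest eligible column
        for k in owners[1:]:                           # only conflicting pivots
            R[:, k] = (R[:, k] + R[:, p]) % 2
            V[:, k] = (V[:, k] + V[:, p]) % 2
    return R, V
```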
Lemma 11.
Algorithm 4 produces the same decomposition as the lazy reduction.
Proof.
This is equivalent to [10, Theorem 3.1].
5.1 Fast Row Algorithm
The fast version of Algorithm 4 is given in Algorithm 5. As above, it reduces the matrix row by row from the bottom up. This variant does not require matrix inversion and requires less padding. The trade-off is that it requires a more complex sequence of updates. Assuming $\mathbf{D}$ is an $n \times n$ matrix, we append an identity matrix so that every row contains a non-zero entry. To simplify notation, from this point on, we assume the number of rows is a power of 2, padding if it is not. This ensures that $\mathrm{left}(i)$ is defined for every row $i$, i.e., the identity matrix ensures there are no zero rows.
It will be convenient to again consider permutations, but here they will be column permutations. We build up the permutation incrementally. In the $i$-th step (processing the $i$-th row from the bottom), we update the permutation with the column transposition that swaps the column returned by $\mathrm{left}$ with the column currently in the $i$-th position; we use this as a shorthand for the two mappings, one in each direction. As $\mathrm{left}$ is always defined, two columns are permuted in each step. The permutation after $i$ steps is then given by the composition of these transpositions, applied in order. While the columns placed in the first $i$ positions are fixed for all later permutations, the original column at a given position may be permuted several times. In the algorithm, we omit the subscript and write $\pi$ for the accumulated permutation. As in the case of the column algorithm, we perform the permutations implicitly. We observe that in the permuted order, the pivots are arranged on the left along the anti-diagonal, see Figure 4.
We keep track of the operations directly in an update matrix, which is initialized as the identity matrix. Before delving into the details of the algorithm, assume that all rows below the current row have been reduced. The updates in that row subtract the appropriate multiples of the pivot column from the later columns with a non-zero entry in the row.
We digress here to explain why this works. As mentioned above, if we consider the column-permuted matrix, it has the pivots in the bottom-left along the anti-diagonal (Figure 4). The $i$-th column in the permuted order corresponds to the column $\pi(i)$ in the original filtration order. Hence, it is the earliest column with a non-zero entry in the $i$-th row from the bottom such that there is no pivot below. Any non-zero entry to the right of it in the permuted order must also occur to the right of $\pi(i)$ in the original filtration order (before permuting), or it would have been returned by $\mathrm{left}$ rather than the current $i$-th column in the permuted order.
Remark 12.
Hence in Algorithm 5, all non-zero entries in the row to the right of the pivot position are the lowest entries of their columns (conflicting pivots), because every column with a pivot below has been moved to the left of this position in the permuted order; to those columns on the left no reduction is applied, i.e., their coefficients are 0.
Finally, as we permute the columns, we must also permute the rows of the update matrix so that the recorded column operations match the permuted column order. This represents the base case of the recursion. As the update matrix is initialized as the identity matrix, the first case is taken care of implicitly.
Returning to the algorithm, we proceed by recursing as in the column algorithm, but on the rows rather than on the columns. We divide the rows of the sub-matrix into the top and bottom halves. We recurse first on the bottom half. Because the pivots are determined by the rows below the current one, we can directly identify the pivot in each row with $\mathrm{left}$.
The base case is a single row. By assumption, the rows below the current row have been updated. At the current row, we first find the left-most eligible non-zero entry and perform a column permutation moving its column into the next fixed position, making the entry a pivot as required. By the reasoning above, we may zero out the rest of the row, recording the operations in the update matrix, and we can update the accumulated permutation.
Though the recursion proceeds as in Algorithm 3, splitting the matrix into the indices in the “first half” and “second half,” a complication is that the columns are processed left to right (the same as the indexing), while the rows are processed bottom-up while being indexed top-down. Hence, for a set of row indices we introduce the reversed set obtained by counting the same indices from the bottom of the matrix; this indexes the rows from the bottom, as required by the algorithm.
Returning to the recursion, once we have processed the bottom half of the rows, we first apply the recorded updates to the corresponding rows of the top half, shown in orange in Figure 5.
Note that though the bottom half is shown in Figure 5, it is not used explicitly in the algorithm, as all the operations in those rows have already been performed. On the other hand, the column indices in the update matrix and in the boundary matrix do not need to be reversed, as they both represent column operations, which are in the standard order. After this, the first row of the top half is now up-to-date with all operations from the rows below it. This allows us to correctly identify $\mathrm{left}$ for this row and perform the column transposition accordingly. Hence, when we recurse on the top half, its base case can be carried out.
Returning from the recursion on the top half, we complete the recursion by performing the update operations on the remaining parts of the columns, shown in green in Figure 5. Note that the columns are indexed in the standard order, not the reversed one. The column update multiplies the relevant rows of the boundary matrix by the corresponding block of the update matrix.
Now all columns prior to and including the current one and all rows below and including the current one are up to date. We show the update sequence in Figure 5 (omitting the permutations). Note that in the column update, the rows are correctly indexed, so no reversal is necessary.
Theorem 13.
Algorithm 5 computes the lazy reduction.
Proof.
The algorithm is equivalent to Algorithm 4, but it performs the operations in batches. Observe that in the base case for a given row, the column operations recorded in the update matrix are equal to those performed in the corresponding step of Algorithm 4. If we performed the operations on the entire columns immediately, we would, in fact, obtain exactly Algorithm 4 (since there would be no need for any further updates in the recursion).
Hence, to show correctness, we need to ensure that we can correctly compute $\mathrm{left}$ in each base case. This is trivially true in the first step. In further steps, we need to ensure that all the relevant column operations have been applied to the row in question. We can directly verify that any column operation coming from a lower row falls in exactly one update step of the recursion, and hence is applied before the base case for the current row. It only remains to show that the columns are up to date before the column operations from the lower rows are applied to the appropriate rows, which is done in line 14.
Theorem 14.
The update matrix computed by Algorithm 5 and the matrix $\mathbf{U}$ sum to $2\mathbf{I}$; equivalently, off the diagonal the update matrix is the negation of $\mathbf{U}$.
Proof.
During a regular reduction, Algorithm 4, updates in matrix $\mathbf{U}$ use row operations to “undo” the column operations in matrices $\mathbf{R}$ and $\mathbf{V}$. When a column is subtracted from column $j$, which itself has not been used to reduce any other column, the corresponding row of $\mathbf{U}$ contains a single diagonal element – compare the situations in Algorithms 1 and 2. In this case, the update in matrix $\mathbf{U}$ is equivalent to setting a single entry to the coefficient that is the negation of the coefficient used for the column update.
Algorithm 5 satisfies this property (which is why it produces a lazy reduction): when an update is recorded in the update matrix, the column that is being updated has not yet been used to reduce any other column. Therefore, off the diagonal, the matrix $\mathbf{U}$ is the negation of the update matrix – compare the coefficients recorded in Algorithm 5 with those applied in Algorithm 4. On the diagonal, both matrices are all ones. So adding the two matrices recovers $2\mathbf{I}$, as claimed.
Remark 15.
As $\mathbf{U}$ is an upper-triangular matrix, by Theorem 7, it can be inverted in matrix multiplication time, as required to recover $\mathbf{V} = \mathbf{U}^{-1}$.
Running Time Analysis.
The base case takes $O(n)$ time, as recording the permutation and the zeroing-out operations each take linear time. For the general case, we need to first apply the column permutations. Since we are permuting at most an $n \times k$ (or $k \times n$) matrix, applying the permutation (and undoing it) takes $O(nk)$ time by Lemma 5. Next, applying the recorded row updates to the top half, for a sub-problem with $k$ rows, requires multiplying a $k \times k$ block with a $k \times O(n)$ block – taking $O(nk^{\omega-1})$ time by Lemma 4. To complete the update, the subtractions take $O(nk)$ time. To compute the column updates, we must again multiply a $k \times k$ block with a $k \times O(n)$ block, which again takes $O(nk^{\omega-1})$ time. Putting these together, we find that the full step can be completed in $O(nk^{\omega-1})$ time and, as in the column algorithm case, solving the recursion gives an $O(n^\omega)$ running time for $\omega > 2$ and an $O(n^2 \log n)$ running time for $\omega = 2$.
6 Discussion
In this paper, we have shown that in the case of persistent (co)homology, the standard bases returned by the exhaustive and lazy reductions can be computed in matrix multiplication time. It is worth noting that while both the column and row algorithms utilize batch updates, they are qualitatively different: (1) Only the column algorithm critically relies on fast matrix inversion; (2) The row algorithm's update sequence is substantially more complex but requires only column permutations. While one can use the column algorithm to compute the lazy basis and the row algorithm for the exhaustive basis in the same asymptotic running time, the resulting algorithms are both complex and not particularly illuminating. There are also numerous technical complications to deal with, so we omit them from this paper. Another outstanding issue is whether it is possible to transform the exhaustive representatives into lazy representatives and vice versa without redoing the reduction. Again, we believe this is possible, but omit the details due to space and because the resulting asymptotic running time is the same. Finally, we conclude with an open question: what is the optimal running time for computing representatives in zigzag persistence? Recently, this has been partially addressed in [12, 14], but optimality in full generality remains open.
References
- [1] Ulrich Bauer. Ripser: efficient computation of Vietoris–Rips persistence barcodes. Journal of Applied and Computational Topology, 5(3):391–423, 2021. doi:10.1007/S41468-021-00071-5.
- [2] Ulrich Bauer, Michael Kerber, and Jan Reininghaus. Clear and compress: Computing persistent homology in chunks. In Topological Methods in Data Analysis and Visualization III: Theory, Algorithms, and Applications, pages 103–117. Springer, 2014. doi:10.1007/978-3-319-04099-8_7.
- [3] Ulrich Bauer, Michael Kerber, Jan Reininghaus, and Hubert Wagner. Phat–persistent homology algorithms toolbox. Journal of symbolic computation, 78:76–90, 2017. doi:10.1016/J.JSC.2016.03.008.
- [4] Oleksiy Busaryev, Sergio Cabello, Chao Chen, Tamal K Dey, and Yusu Wang. Annotating simplices with a homology basis and its applications. In Scandinavian workshop on algorithm theory, pages 189–200. Springer, 2012. doi:10.1007/978-3-642-31155-0_17.
- [5] Gunnar Carlsson and Vin de Silva. Zigzag persistence. Foundations of computational mathematics, 10(4):367–405, August 2010. doi:10.1007/s10208-010-9066-0.
- [6] Gunnar Carlsson, Vin de Silva, and Dmitriy Morozov. Zigzag persistent homology and real-valued functions. In Proceedings of the Twenty-fifth Annual Symposium on Computational Geometry, SCG ’09, pages 247–256, New York, NY, USA, 2009. ACM. doi:10.1145/1542362.1542408.
- [7] Chao Chen and Michael Kerber. An output-sensitive algorithm for persistent homology. In Proceedings of the twenty-seventh annual symposium on Computational geometry, pages 207–216, 2011. doi:10.1145/1998196.1998228.
- [8] David Cohen-Steiner, Herbert Edelsbrunner, and Dmitriy Morozov. Vines and vineyards by updating persistence in linear time. In Proceedings of the twenty-second annual symposium on Computational geometry, pages 119–126, 2006. doi:10.1145/1137856.1137877.
- [9] David Cohen-Steiner, André Lieutier, and Julien Vuillamy. Lexicographic optimal homologous chains and applications to point cloud triangulations. Discrete & computational geometry, 68(4):1155–1174, December 2022. doi:10.1007/s00454-022-00432-6.
- [10] Vin de Silva, Dmitriy Morozov, and Mikael Vejdemo-Johansson. Dualities in persistent (co)homology. Inverse Problems, 27(12):124003, 2011.
- [11] Vin de Silva, Dmitriy Morozov, and Mikael Vejdemo-Johansson. Persistent cohomology and circular coordinates. Discrete & computational geometry, 45(4):737–759, June 2011. doi:10.1007/s00454-011-9344-x.
- [12] Tamal K Dey and Tao Hou. Updating barcodes and representatives for zigzag persistence. arXiv preprint arXiv:2112.02352, 2021.
- [13] Tamal K Dey and Tao Hou. Fast computation of zigzag persistence. In 30th Annual European Symposium on Algorithms (ESA 2022). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022.
- [14] Tamal K Dey, Tao Hou, and Dmitriy Morozov. A fast algorithm for computing zigzag representatives. arXiv preprint arXiv:2410.20565, 2024. doi:10.48550/arXiv.2410.20565.
- [15] H. Edelsbrunner, D. Letscher, and A. Zomorodian. Topological persistence and simplification. Discrete & computational geometry, 28:511–533, 2002. doi:10.1007/S00454-002-2885-2.
- [16] Herbert Edelsbrunner and John L Harer. Computational topology: an introduction. American Mathematical Society, 2022.
- [17] G. Henselman and R. Ghrist. Matroid Filtrations and Computational Persistent Homology. ArXiv e-prints, June 2016. arXiv:1606.00199.
- [18] Alan Hylton, Greg Henselman-Petrusek, Janche Sang, and Robert Short. Performance enhancement of a computational persistent homology package. In 2017 IEEE 36th international performance computing and communications conference (IPCCC), pages 1–8. IEEE, 2017. doi:10.1109/PCCC.2017.8280468.
- [19] Gil Kalai. Enumeration of q-acyclic simplicial complexes. Israel Journal of Mathematics, 45:337–351, 1983.
- [20] Michael Kerber, Donald R Sheehy, and Primoz Skraba. Persistent homology and nested dissection. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1234–1245. SIAM, 2016. doi:10.1137/1.9781611974331.CH86.
- [21] Nikola Milosavljević, Dmitriy Morozov, and Primoz Skraba. Zigzag persistent homology in matrix multiplication time. In Proceedings of the twenty-seventh Annual Symposium on Computational Geometry, pages 216–225, 2011. doi:10.1145/1998196.1998229.
- [22] Dmitriy Morozov. Persistence algorithm takes cubic time in worst case. BioGeometry News, Dept. Comput. Sci., Duke Univ, 2, 2005.
- [23] Dmitriy Morozov and Primoz Skraba. Persistent (co)homology in matrix multiplication time, 2024. doi:10.48550/arXiv.2412.02591.
- [24] Arnur Nigmetov and Dmitriy Morozov. Topological optimization with big steps. Discrete & computational geometry, 72(1):310–344, July 2024. doi:10.1007/s00454-023-00613-x.
- [25] Primoz Skraba, Gugan Thoppe, and D. Yogeshwaran. Randomly weighted d-complexes: Minimal spanning acycles and persistence diagrams. The Electronic Journal of Combinatorics, pages P2–11, 2020.
- [26] Hubert Wagner, Chao Chen, and Erald Vuçini. Efficient computation of persistent homology for cubical data. In Topological methods in data analysis and visualization II: theory, algorithms, and applications, pages 91–106. Springer, 2011.
- [27] Afra Zomorodian and Gunnar Carlsson. Computing persistent homology. In Proceedings of the twentieth annual symposium on Computational geometry, pages 347–356, 2004. doi:10.1145/997817.997870.