Decomposing Multiparameter Persistence Modules
Abstract
Dey and Xin (J. Appl. Comput. Topol., 2022) describe an algorithm to decompose finitely presented multiparameter persistence modules using a matrix reduction algorithm. Their algorithm only works for modules whose generators and relations are distinctly graded. We extend their approach to work on all finitely presented modules and introduce several improvements that lead to significant speed-ups in practice. Our algorithm is fixed-parameter tractable with respect to the maximal number of relations of the same degree, and with further optimisation we obtain a polynomial-time algorithm for interval-decomposable modules, independent of this parameter. In particular, we can decide interval-decomposability within the same time bound. As a by-product of the proofs of correctness we develop a theory of parameter restriction for persistence modules. Our algorithm is implemented in the software library aida, the first to enable the decomposition of large inputs. We show its capabilities via extensive experimental evaluation.
Keywords and phrases:
Topological Data Analysis, Multiparameter Persistence Modules, Persistence, Decomposition
Funding:
Tamal K. Dey: Supported by NSF grants CCF-2437030 and DMS-2301360.
2012 ACM Subject Classification:
Mathematics of computing → Algebraic topology
Supplementary Material:
Software (Algorithm): https://github.com/JanJend/AIDA [38], archived at Software Heritage.
Software (Library): https://github.com/JanJend/Persistence-Algebra [39], archived at Software Heritage.
Acknowledgements:
The second author wants to thank Klaus Lux for his insights and discussions about the MeatAxe algorithm and Algorithmic Representation Theory, as well as Martin Kalck for pointers to relevant established theory.
Editors:
Oswin Aichholzer and Haitao Wang
1 Introduction

Multiparameter Persistence Modules (MPM) as introduced by Carlsson and Zomorodian [21, 22] make it possible to capture topological features of data with respect to more than one parameter. This, for example, allows the detection of features on different scales, stably with respect to outliers [13]. One of the main challenges in dealing with MPM is that their indecomposable summands do not provide a discrete invariant, unlike in the case of finitely generated 1-parameter persistence modules, where this decomposition leads to the barcode.
In contrast, indecomposable MPM can be arbitrarily complex because the representation theory of the partially ordered set $\mathbb{Z}^2$ is wild ([45, 49]). In light of this difficulty, researchers in TDA have come up with various invariants with varying degrees of discriminating power and ease of computability ([16, 24, 27, 32, 48, 51, 52, 58, 60]). Nevertheless, the question remains how to compute the indecomposable summands of an MPM.
A fast algorithm for this task has numerous benefits. Primarily, every additive invariant of a persistence module can, by definition, be computed from its indecomposable summands, possibly reducing computation time. For example, the computation of the matching distance [10, 12] becomes more tractable for decomposed modules as the complexity of the barcode template [46] reduces, which we show empirically in Section 7.
Contributions.
We present an algorithm inspired by the Dey-Xin algorithm [29], speeding up various steps and lifting the assumption that the module needs to be distinctly graded. Our methods are collected in the software package aida (Figure 1), which allows the decomposition of presentations arising from large data within seconds. We demonstrate the usefulness of our improvements in an extensive experimental evaluation. Since the methods used in aida are of more general interest, we implemented them in a stand-alone, header-only C++ library Persistence-Algebra. Both this library and the aida code are publicly available (https://github.com/JanJend/AIDA and https://github.com/JanJend/Persistence-Algebra).
While the distinctly graded case allows for an incremental column-by-column approach to compute a decomposition, the general case requires considering all columns of the same degree at once, say $k$ many at the $i$-th step. It does not suffice to consider all permutations of these columns [5]; instead, arbitrary changes of basis within the columns could be part of a correct solution. Our first approach requires a brute-force iteration over these change-of-basis matrices (which is only possible over a finite field) and is therefore not polynomial in $k$. Nevertheless, through a careful analysis, we limit the number of cases and provide a tractable solution if $k$ is small. Since we can then use the poly-time strategy of [29], this yields a decomposition algorithm which is fixed-parameter tractable with respect to $k$.
To further reduce the iteration – or even avoid it altogether – we analyse which kinds of matrix operations can actually help in the decomposition. This leads to another FPT-algorithm in a parameter at most $k$, which promises to accelerate the computation for modules where $k$ is too large. A modification of this idea also yields an algorithm which, for interval-decomposable modules, runs in polynomial time, independent of $k$.
Related work.
Multi-parameter persistent homology is an active research area with many theoretic, computational, and applied results in recent years [15]. In particular, efficient algorithms have been devised for the computation of a presentation of standard bifiltrations [4, 2, 17, 18, 25, 44], for minimizing presentations [29, 33, 47] and for efficient distance computation [10, 12, 28]. Our decomposition algorithm complements this pipeline.
In theory we can decompose any module over a finite dimensional algebra in polynomial time by decomposing its endomorphism algebra [23], but this algorithm is not practical. The algorithms in the C MeatAxe package ([50], [57]), in the MAGMA system [14], and in the GAP system [34], [54], rely on the Holt-Rees extension [37] of the MeatAxe algorithm [53], both of which are Las Vegas algorithms and too slow for the large sparse matrices appearing in TDA. We remark that the run-time complexity of these algorithms is sometimes not cited precisely in the TDA literature. For example, the bounds in [15, Section 8.5] and [29, Introduction] are a misinterpretation of the usage of the MeatAxe algorithm in this pipeline and use the lower bound given in [36] for the non-practical algorithm described in [55].
The only algorithm specifically designed for MPM is the aforementioned Dey-Xin algorithm [29], which has a time complexity of $O(n^{2\omega+1})$, where $\omega$ is the matrix multiplication exponent. It only works for distinctly graded modules, a property that is satisfied by some, but not all modules that appear in practice. Moreover, an algorithm to compute graphcodes [43] yields a partial decomposition.
It is well known that decompositions are not stable; every persistence module is even arbitrarily close to an indecomposable one in interleaving distance [9], but geometric bifiltrations typically do not create such pathologies. Moreover, Bjerkevik [11, Thm 5.4] constructs a module which captures all decompositions of an entire $\varepsilon$-interleaving-neighbourhood. His construction creates many generators and relations of the same degree, so its decomposition requires an efficient algorithm, which we provide with aida.
Complicated indecomposables can arise already in relatively simple geometric examples [20, 19] and most modules in practice are not interval-decomposable (e.g., most examples from our benchmark set contain non-interval indecomposables; see also [3]). However, we expect that in practice most modules will mainly consist of small or structurally simple indecomposables (see [1]), and we designed our algorithm to work fast in such scenarios.
Outline.
We review persistence modules and their presentations in Section 2. The algorithm by Dey and Xin [29], which is the basis of our work, is reviewed and modified in Sections 3 and 4. In Section 5, we describe how to decompose non-distinctly graded modules with an exhaustive approach. In Section 6, we present an algorithm that reduces the potentially prohibitive iteration of the exhaustive algorithm. The detailed descriptions of our algorithms and proofs of correctness are deferred to the extended arXiv version [26].
2 Persistence Modules and their Presentations
Consider a simplicial complex $S$, discretely filtered with respect to $d$ parameters. That is, for each $z \in \mathbb{Z}^d$ there is a subcomplex $S_z \subseteq S$, and for each pair $z \leq z'$ there is an inclusion $S_z \hookrightarrow S_{z'}$. Applying the simplicial homology functor $H_p(-; K)$ for a field $K$ results in a commutative diagram of $K$-vector spaces over $\mathbb{Z}^d$. For $d = 2$, it has the following shape:

$$\begin{array}{ccccc}
H_p(S_{(0,1)}) & \rightarrow & H_p(S_{(1,1)}) & \rightarrow & \cdots \\
\uparrow & & \uparrow & & \\
H_p(S_{(0,0)}) & \rightarrow & H_p(S_{(1,0)}) & \rightarrow & \cdots
\end{array}$$

The maps encode how homology classes persist when the parameters are increased.
Definition 1.
A $d$-parameter persistence module over $K$ is a commutative diagram of $K$-vector spaces over $\mathbb{Z}^d$.
Presentations.
A persistence module can be succinctly represented by a presentation. This is a set of generators and a set of relations among these generators. If these sets are finite, they form a presentation matrix, where the row indices correspond to the generators, the column indices to the relations, and an entry at $(i, j)$ tells us with which scalar coefficient the $i$-th generator appears in the $j$-th relation.
Definition 2.
A graded matrix is a matrix $M \in K^{n \times m}$ together with functions $\mathrm{rdeg}\colon \{1, \dots, n\} \to \mathbb{Z}^d$ and $\mathrm{cdeg}\colon \{1, \dots, m\} \to \mathbb{Z}^d$ which decorate the rows and columns with degrees such that $M_{ij} \neq 0$ implies $\mathrm{rdeg}(i) \leq \mathrm{cdeg}(j)$.
Given two such degree functions, we write $K^{\mathrm{rdeg} \times \mathrm{cdeg}}$ for the corresponding space of graded matrices.
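To make the data structure concrete, the following is a minimal C++ sketch of a graded matrix over $\mathbb{F}_2$ with sparse columns, reused by the later sketches; the names (Degree, GradedMatrix) are illustrative assumptions and not aida's actual API.

```cpp
#include <cstddef>
#include <vector>

struct Degree { int x, y; };                 // a degree in Z^2
inline bool leq(Degree a, Degree b) {        // the product order on Z^2
    return a.x <= b.x && a.y <= b.y;
}

struct GradedMatrix {
    std::vector<Degree> rdeg, cdeg;          // degrees of rows / columns
    std::vector<std::vector<int>> cols;      // per column: sorted row indices of 1-entries

    // Grading condition of Definition 2: a non-zero entry at (i, j)
    // requires rdeg(i) <= cdeg(j).
    bool is_valid() const {
        for (std::size_t j = 0; j < cols.size(); ++j)
            for (int i : cols[j])
                if (!leq(rdeg[i], cdeg[j])) return false;
        return true;
    }
};
```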
Definition 3.
A finite presentation is called minimal if there is no presentation of the same module with fewer generators or fewer relations.
Remark 4.
Every finitely generated (fg) persistence module has a finite presentation (Hilbert’s syzygy theorem [35]). Furthermore, there are efficient algorithms to compute minimal presentations of persistence modules from simplicial bi-filtrations ([8, 42, 47]), and Schreyer’s algorithm ([31]) applies in the general case.
Theorem 5 (Krull-Remak-Schmidt-Azumaya [6]).
Every fg persistence module $M$ has a finite decomposition into indecomposable submodules, $M \cong M_1 \oplus \cdots \oplus M_s$, and the isomorphism types of the submodules are independent of the decomposition.
Decompositions of modules correspond to decompositions of their presentations, but this correspondence is only surjective; see [26, Example 2.30] for details.
Definition 6.
Let $M$ be a matrix. A block is a pair $B = (R, C)$ of a set of row indices and a set of column indices, and we write $M_B$ for the corresponding submatrix. $M$ is called (block-)decomposed if there is a set of blocks $\mathcal{B}$ which partitions the row and column indices such that $M$ is zero outside of the blocks. We write $M = \bigoplus_{B \in \mathcal{B}} M_B$ for a matrix which is block-decomposed, and if the matrix is graded, then $\mathrm{rdeg}_B$, $\mathrm{cdeg}_B$ denote the corresponding degrees of $M_B$.
Example 7.
A graded matrix over $\mathbb{Z}^2$, decomposed into two blocks.
Definition 8 ([29, Section 4.1] ).
For a graded matrix, the elementary column operation adding column $i$ to column $j$ is called admissible if $\mathrm{cdeg}(i) \leq \mathrm{cdeg}(j)$; the elementary row operation adding row $i$ to row $j$ is called admissible if $\mathrm{rdeg}(i) \geq \mathrm{rdeg}(j)$.
Admissible operations preserve the isomorphism type of the presented module.
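Continuing the sketch above, admissible operations over $\mathbb{F}_2$ reduce to XOR-merges of sparse index lists; the admissibility checks mirror Definition 8 as reconstructed here and are assumptions of this sketch, not library code.

```cpp
#include <algorithm>
#include <iterator>

// XOR-add column `from` into column `to` (an elementary operation over F_2).
inline void add_col(GradedMatrix& M, int from, int to) {
    std::vector<int> merged;
    std::set_symmetric_difference(
        M.cols[from].begin(), M.cols[from].end(),
        M.cols[to].begin(),   M.cols[to].end(),
        std::back_inserter(merged));
    M.cols[to] = std::move(merged);
}

// Admissible iff cdeg(from) <= cdeg(to): every new entry in column `to`
// satisfies rdeg <= cdeg(from) <= cdeg(to), so the grading survives.
inline bool col_op_admissible(const GradedMatrix& M, int from, int to) {
    return leq(M.cdeg[from], M.cdeg[to]);
}

// For rows the direction flips: adding row `from` to row `to` is admissible
// iff rdeg(to) <= rdeg(from), for the same reason.
inline bool row_op_admissible(const GradedMatrix& M, int from, int to) {
    return leq(M.rdeg[to], M.rdeg[from]);
}
```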
Proposition 9 ([26, Sec.2.1]).
If $M$, $M'$ minimally present isomorphic modules, then there is a sequence of admissible operations transforming $M$ into $M'$. Equivalently, there are invertible graded matrices $U$, $V$ such that $M' = U M V$.
In particular any decomposition of a module lifts to a change-of-basis between a presentation matrix and a block-decomposed presentation matrix.
Remark 10.
Proposition 9 points to the main difficulty in writing a matrix-elimination type algorithm. Assume that we want to bring a presentation $M$ into a certain shape, i.e. with zeroes in a block $B$. This is equivalent to finding invertible graded matrices $U$, $V$ such that $(U M V)_B = 0$. A priori this is a quadratic system of equations in the entries of $U$ and $V$.
3 The Generalized Persistence Algorithm
The Generalized Persistence (GP) Algorithm introduced by Dey and Xin [29] finds an indecomposable decomposition of a presentation matrix assuming that all relations have pairwise distinct degrees. We review the core functionality of their algorithm.
The GP-Algorithm loops over the relations in an order compatible with the product order on $\mathbb{Z}^d$, similarly to the Persistence Algorithm for 1-parameter persistence modules [30, 7]. It manipulates the matrix via admissible row and column operations and maintains the invariant that after having handled the $i$-th column, the matrix consisting of the first $i$ columns is block-decomposed into indecomposables. We must therefore decompose a graded matrix that is already decomposed except for its last column. This is reflected in the next definition; note that the GP algorithm assumes $N$ to consist of a single column, but we already define the structure for $N$ of arbitrary size to prepare our extension of the algorithm.
Definition 11.
Let $M = (M' \mid N)$ be a presentation, where $N$ contains $k$ columns, all of degree $z$. It is called $\mathcal{B}$-decomposed if it is minimal, $M' = \bigoplus_{B \in \mathcal{B}} M'_B$ is an indecomposable decomposition, and the degree of no column of $M'$ is larger than $z$. For $B \in \mathcal{B}$ we denote by $N_B$ the submatrix of $N$ with the row indices of $B$.
BlockReduce.
If we can decompose $\mathcal{B}$-decomposed presentations, then by iterating over the relations we can decompose all finite presentations. If $N$ has only one column, this means we have to zero out every sub-column $N_B$ whenever possible to reach the block-decomposition with the most blocks. This is done by a subroutine which Dey and Xin called BlockReduce, a name we will keep.
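The outer loop can be sketched as follows: sort the columns along a linear extension of the product order, group equal degrees into batches, and hand each batch to BlockReduce. The helper names and the batch grouping are our illustration; block_reduce and merge_blocks stand in for the subroutines discussed in this and the following sections.

```cpp
#include <algorithm>
#include <numeric>

struct Block { std::vector<int> rows, cols; };

void decompose(GradedMatrix& M) {
    std::vector<int> order(M.cols.size());
    std::iota(order.begin(), order.end(), 0);
    // any linear extension of the product order on Z^2 works:
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        Degree da = M.cdeg[a], db = M.cdeg[b];
        if (da.x + da.y != db.x + db.y) return da.x + da.y < db.x + db.y;
        return da.x < db.x;
    });
    std::vector<Block> blocks;   // invariant: the processed part is decomposed
    std::size_t s = 0;
    while (s < order.size()) {
        std::size_t e = s + 1;   // group all columns of the same degree (a batch)
        while (e < order.size() && M.cdeg[order[e]].x == M.cdeg[order[s]].x
                                && M.cdeg[order[e]].y == M.cdeg[order[s]].y) ++e;
        // block_reduce(M, blocks, order, s, e); // zero out the N_B where possible
        // merge_blocks(blocks, ...);            // blocks still hit by N merge
        s = e;
    }
}
```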
Example 12.
We illustrate BlockReduce on the example given in Figure 3. Let us call the upper (blue) block $b$ and the lower (red) block $c$. We will use two types of admissible operations to try to zero out $N_b$:
- (a) Column operations from the columns of $b$ to $N$, e.g. adding the first column to the fifth column.
- (b) Row operations from the rows of $c$ to the rows of $b$, e.g. adding the fourth row to the second row.
Adding the third row to the first would delete all entries in $N_b$, but this would also destroy the block-decomposition of $M'$, placing a non-zero entry outside the blocks:
However, if we also add the fourth to the second row and then use the two column operations indicated by the (green) arrows (right side), we can revert the alteration of $M'$.
We thus need a third type of admissible operations: (c) column operations from the columns of $b$ to the columns of $c$.
The operations of types (a), (b), and (c) have a pleasant feature: their effect on the matrix does not change if the order of their application changes. Dey and Xin call such sets of operations independent. In the setting of Remark 10, when restricting the entries of $U$ and $V$ to those belonging to independent operations, the system of equations becomes linear.
Dey and Xin observed that, following this strategy, repeated application of the operations (a)–(c) actually suffices to decompose a $\mathcal{B}$-decomposed presentation with $k = 1$, but assumed that the generators are also distinctly graded. We prove a more general statement, which therefore also lifts this assumption from the GP Algorithm.
Lemma 13 ([26, Lemma 5.10]).
Let $(M' \mid N)$ be $\mathcal{B}$-decomposed; then after a suitable sequence of elementary column operations on $N$, there is a sequence of independent admissible operations which finds an indecomposable decomposition of $(M' \mid N)$ without altering $M'$.
At last, assume again that $k = 1$. After having zeroed out as much as possible of $N$, the blocks $B$ for which $N_B$ is still non-zero must all merge into a new block, since their non-zero entries now overlap in this last column. This finishes the loop of the GP Algorithm.
Extending BlockReduce.
Consider a $\mathcal{B}$-decomposed presentation $(M' \mid N)$ and let now $k$ be arbitrary, where we recall that $k$ is the number of columns of $N$. At first, we are still interested in deleting the entire sub-matrices $N_B$ for every $B \in \mathcal{B}$ wherever this is possible. Consider now sets of admissible operations of the types (a)–(c) in Example 12, encoded as graded matrices. Using these, their effect on the presentation can be described:
- (a) A graded matrix $a$ encoding column operations from the columns of $M'_B$ into $N$; it changes $N_B$ by $M'_B\, a$.
- (b) A graded matrix $b$ encoding row operations from the rows outside $B$ into the rows of $B$; it changes $N_B$ by $b\, N_{\bar{B}}$ and $M'$ by $b\, M'_{\bar{B}}$ in the rows of $B$.
- (c) A graded matrix $c$ encoding column operations from the columns of $B$ into the columns outside $B$; it changes $M'$ by $M'_B\, c$ in the rows of $B$.
Finding $(b, c)$ such that after performing (b) and (c) the matrix $M'$ has not changed is equivalent to demanding that $b\, M'_{\bar{B}} + M'_B\, c = 0$, which is a linear system (1a). Finding $(a, b)$ such that performing (a) and (b) zeroes out $N_B$, i.e. $M'_B\, a + b\, N_{\bar{B}} + N_B = 0$, is another one (1b).
BlockReduce (General Version).
Input: a $\mathcal{B}$-decomposed presentation $(M' \mid N)$ and a block $B \in \mathcal{B}$.
(1) Find graded matrices $(a, b, c)$ satisfying both (1a) and (1b); if they exist, apply the encoded operations, zeroing out $N_B$ without changing $M'$.
In [29], Dey and Xin write these equations in one large matrix.
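Whatever shape the assembled matrix takes, both (1a) and (1b) are plain linear systems over the base field; over $\mathbb{F}_2$ they can be solved by bitset-based Gaussian elimination as in this sketch (the fixed width W is an assumption for brevity).

```cpp
#include <bitset>
#include <vector>

constexpr int W = 256;   // illustrative fixed width: #unknowns + 1 <= W

// Rows encode the augmented system [A | b]; returns whether A x = b is
// solvable over F_2. Column `ncols` holds the right-hand side b.
bool solvable(std::vector<std::bitset<W>> rows, int ncols) {
    int r = 0;
    for (int c = 0; c <= ncols && r < (int)rows.size(); ++c) {
        int piv = -1;
        for (int i = r; i < (int)rows.size(); ++i)
            if (rows[i][c]) { piv = i; break; }
        if (piv < 0) continue;
        if (c == ncols) return false;  // pivot in the RHS column: inconsistent
        std::swap(rows[r], rows[piv]);
        for (int i = 0; i < (int)rows.size(); ++i)
            if (i != r && rows[i][c]) rows[i] ^= rows[r];
        ++r;
    }
    return true;
}
```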
Run time of BlockReduce.
The cost of BlockReduce is dominated by solving the linear system (1), i.e., by Gaussian elimination over $K$ on a matrix whose size is bounded by the size of the presentation.
4 Modifications
aida implements many modifications to BlockReduce which make this subroutine much faster and enable the Automorphism-invariant $\mathcal{B}$-decomposition in Section 6.
Column-Sweeps.
When processing a new set of columns of degree $z$, we first try to zero out each $N_B$ only with column operations (i.e., ignoring row operations at first), which is often sufficient and, if not, may still reduce the number of non-zero entries. To this end we reduce the sub-matrix of $M'$ containing the columns of degree at most $z$. Importantly, this matrix can be stored and sometimes even updated when the next set of columns is considered, in the same way as in the standard persistence algorithm for one parameter.
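A column sweep is then exactly the 1-parameter reduction restricted to admissible additions; a sketch reusing the helpers above, where `low` (an assumption of this sketch) maps a pivot row to the reduced column owning it:

```cpp
#include <unordered_map>

void column_sweep(GradedMatrix& M, int j, std::unordered_map<int, int>& low) {
    while (!M.cols[j].empty()) {
        int pivot = M.cols[j].back();                 // largest row index
        auto it = low.find(pivot);
        if (it == low.end() || !col_op_admissible(M, it->second, j))
            break;                                    // pivot unowned, or addition not admissible
        add_col(M, it->second, j);                    // kills the pivot
    }
    if (!M.cols[j].empty())
        low.emplace(M.cols[j].back(), j);             // register the surviving pivot
}
```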
BlockReduce (Hom-Variant).
The following change is subtle but crucial:
- 1. Find a basis of the solution space of (1a) for every block $B$ and store it.
- 2. Search for a solution of (1b) in the vector space spanned by this basis, for each $B$.
This split makes no computational difference asymptotically, and its overhead is negligible. In subsequent algorithms, we will call BlockReduce recursively without changing $M'$ (thanks to Lemma 13). By solving and storing (1a), we thus avoid recomputations during the recursion.
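The bookkeeping behind this split is ordinary linear algebra over $\mathbb{F}_2$: store a reduced basis of the (1a)-solutions once and answer each (1b)-query by a span-membership test. A minimal sketch (the classic XOR basis; the width NV and names are our assumptions):

```cpp
#include <bitset>

constexpr int NV = 256;            // illustrative bound on the number of unknowns
using Vec = std::bitset<NV>;

struct SolutionSpace {
    Vec at[NV];                    // at[p]: basis vector with highest set bit p (or empty)

    bool insert(Vec v) {           // returns false if v was already in the span
        for (int i = NV - 1; i >= 0; --i) {
            if (!v[i]) continue;
            if (at[i].none()) { at[i] = v; return true; }
            v ^= at[i];
        }
        return false;
    }
    bool in_span(Vec v) const {    // the (1b)-query against the stored (1a)-basis
        for (int i = NV - 1; i >= 0; --i) {
            if (!v[i]) continue;
            if (at[i].none()) return false;
            v ^= at[i];
        }
        return true;
    }
};
```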
Morphisms of Presentations.
Let $M$, $M'$ present persistence modules $P$, $P'$. Up to sign, the solutions of BlockReduce (1a) form the $K$-vector space $\mathrm{Hom}(M, M')$ of morphisms of presentations, and there is a natural $K$-linear surjection $\mathrm{Hom}(M, M') \to \mathrm{Hom}(P, P')$.
Fast Hom-computation.
Let $P$, $Q$ be persistence modules; then $\mathrm{Hom}(P, Q)$ can be computed faster if $Q$ is pointwise “small”. This is because we can restrict the target space of each generator of $P$. We will explore this thoroughly in future research and also point to the following known result for the “smallest” modules, also due to Dey and Xin.
Proposition 15 ([28], Proposition 16).
Let $I$ and $J$ be interval modules with $n$ generators and relations in total; then $\mathrm{Hom}(I, J)$ can be computed in time linear in $n$.
Furthermore, not all homomorphisms are needed to find a decomposition.
Observation 16.
A map $f\colon P \to Q$ of persistence modules induces a linear map of vector spaces $f_z\colon P_z \to Q_z$. If this map is the zero map then, since the new columns are generated at $z$, the effect of $f$ on them is a linear combination of the columns of $M'$ (cf. Column-Sweeps).
Definition 17.
For persistence modules $P$, $Q$ and a degree $z$ we define the space of $z$-homomorphisms
$$\mathrm{Hom}_z(P, Q) := \mathrm{Hom}(P, Q)\, /\, \{ f \mid f_z = 0 \}.$$
For presentations one defines $\mathrm{Hom}_z(M, M')$ analogously.
Thus, instead of finding a basis for the whole solution space of BlockReduce (1a), we actually only need to compute a set of solutions which spans $\mathrm{Hom}_z$. This is a simple reduction based on Observation 16 and will decrease the size of the system (1a); more importantly for Section 6, it will sometimes reduce this solution space to zero.
Interval Modules.
For interval modules we can describe this reduction precisely. By [28, Proposition 16], the vector space $\mathrm{Hom}(I, J)$ between interval modules has a basis consisting of “valid” overlaps.
At most one of the valid overlaps can contain $z$, and so $\dim \mathrm{Hom}_z(I, J) \leq 1$. Define a relation $I \preceq J$ if $\mathrm{Hom}_z(I, J) \neq 0$. This relation is a partial order on any set of isomorphism classes of interval modules ([26, Lemma 6.16]). Additionally, an interval module can have dimension at most $1$ at $z$. Therefore, when we consider a set of new relations in the algorithm, if the presentation presents an interval-decomposable module, we can reduce every sub-column $N_B$ to a single non-zero entry. Together with the transitivity of $\preceq$, this observation allows Gaussian (row-)elimination on $N$, avoiding BlockReduce (1) (cf. Section 6 and [26, Algorithm B.1]).
5 Decomposing General Presentations
One might hope that we could process the columns of $N$ in some order and just use BlockReduce. Unfortunately this does not work: counterexamples are [5, Example 5.14] and also [26, Example 5.2], where we construct a 2-parameter module with a smallest presentation that cannot be decomposed with this method. If on the other hand we allow column operations which do not follow some order, the set of blocks $B$ for which $N_B$ can be partially deleted is no longer unique up to permutation [26, Example 5.3].
Exhaustive Decomposition.
One immediate difference to the distinctly graded case is that a decomposition might introduce several new blocks. Instead of finding all of them at once, we will try to split the presentation into two summands and use recursion. We invoke Lemma 13 again to find invertible matrices which leave $M'$ unchanged and such that the columns of $N$ split into two groups supporting the two summands.
It follows that there is an invertible matrix $T$ and a partition of the columns of $N T$ which allow us to decompose the whole presentation by zeroing out submatrices with BlockReduce: after multiplying with $T$ from the right, for every block $B$ we can delete the part of $N_B$ belonging to one of the two column groups. Here we see that we would have to solve (1a) again, but since we stored its solution this is not necessary, and (1b) is solved efficiently. Since we work over a finite field, this immediately implies a brute-force algorithm:
Exhaustive $\mathcal{B}$-decomposition.
For each invertible matrix $T$ over $\mathbb{F}_q$ and each bipartition of the columns, set $N \leftarrow N T$ and decompose each part separately with BlockReduce.
This is computationally infeasible even for small $k$: the number of iterations of this loop is governed by the number of invertible $k \times k$ matrices over $\mathbb{F}_q$, which grows like $q^{k^2}$. Fortunately, we can be much smarter about the matrices and partitions we use. Assume that $T$ is as above, so that the presentation decomposes into two parts, supported on two column sets of $N T$, after using BlockReduce (1b). If $T_1$, $T_2$ are matrices with the same column spaces as the two column groups of $N T$, respectively, then $(T_1 \mid T_2)$ could be brought to the same decomposition with BlockReduce. That is, finding a decomposition with this strategy depends only on the decomposition of the column space of $N$ which $T$ represents. Additionally, once we have deleted as many parts of the first summand as possible, we can update the block structure and treat the second summand in the same way, but now also being able to use column operations from the columns of the first group to those of the second without breaking linearity of the system. Consequently, we also do not need to consider a decomposition if one with the same first subspace has been considered already, no matter which complement represents the second part. Therefore we only need to iterate over all proper subspaces of $\mathbb{F}_q^k$, i.e. the Grassmannian $\mathrm{Gr}(i, k)$ for every $0 < i < k$. These subspaces can be generated fast by leveraging the Schubert cell decomposition [56]:
Algorithm Generate.
For each pivot set $P \subseteq \{1, \dots, k\}$ construct a matrix in column-echelon form with pivots given by $P$. Then construct all subspaces in the corresponding Schubert cell by varying the entries that are irrelevant for the echelon form and not in the same row as a pivot. For each subspace $V$ generated this way, choose a complement $\bar{V}$ with the opposite pivots and store the pairs $(V, \bar{V})$ in a set $\mathcal{S}$.
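A sketch of this enumeration over $\mathbb{F}_2$: one reduced column-echelon matrix per subspace, grouped by pivot sets (Schubert cells). Columns are bitmasks; the code assumes small $k$ so that the free-entry enumeration fits into machine words, and uses GCC's `__builtin_popcount`.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// All i-dimensional subspaces of F_2^k, each given by k-bit column masks in
// reduced column-echelon form (exactly one echelon matrix per subspace).
std::vector<std::vector<uint32_t>> all_subspaces(int k, int i) {
    std::vector<std::vector<uint32_t>> result;
    for (uint32_t piv = 0; piv < (1u << k); ++piv) {
        if (__builtin_popcount(piv) != i) continue;   // pivot rows of this cell
        std::vector<int> prows;
        for (int r = 0; r < k; ++r) if (piv >> r & 1) prows.push_back(r);
        std::vector<std::pair<int, int>> slots;       // free (column, row) entries
        for (int j = 0; j < i; ++j)
            for (int r = 0; r < prows[j]; ++r)
                if (!(piv >> r & 1)) slots.push_back({j, r});
        for (uint64_t bits = 0; bits < (1ull << slots.size()); ++bits) {
            std::vector<uint32_t> cols(i);
            for (int j = 0; j < i; ++j) cols[j] = 1u << prows[j];
            for (std::size_t s = 0; s < slots.size(); ++s)
                if (bits >> s & 1) cols[slots[s].first] |= 1u << slots[s].second;
            result.push_back(cols);
        }
    }
    return result;
}
```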
Avoiding Redundant Counting.
Since exchanging $V$ and $\bar{V}$ in a pair $(V, \bar{V})$ would not make a difference for the decomposition, we can ignore the Grassmannians $\mathrm{Gr}(i, k)$ for $i > k/2$. In the case where $k$ is even and $i = k/2$ we can avoid double counting even more. Consider the graph on $\mathrm{Gr}(k/2, k)$ where two subspaces are adjacent if they form a decomposition of $\mathbb{F}_q^k$ (also called the $q$-Kneser graph). By symmetry we only need to find a minimal vertex cover of this graph and use these subspaces for the computation above. This can be done by considering in Generate only those pivot sets which contain the first position. With this pruning, the exhaustive decomposition iterates over a set of subspaces whose size still grows like the middle Grassmannian, i.e. roughly like $q^{k^2/4}$.
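To gauge how many subspaces any such strategy must visit, the Gaussian binomial $\binom{k}{i}_q$ counts the $i$-dimensional subspaces of $\mathbb{F}_q^k$; a small helper, exact for small parameters (it relies on GCC's 128-bit integers):

```cpp
#include <cstdint>

// [k choose i]_q = prod_{j=0}^{i-1} (q^{k-j} - 1) / (q^{j+1} - 1)
uint64_t gaussian_binomial(int k, int i, uint64_t q) {
    __int128 num = 1, den = 1;
    for (int j = 0; j < i; ++j) {
        __int128 qk = 1, qj = 1;
        for (int t = 0; t < k - j; ++t) qk *= q;
        for (int t = 0; t < j + 1; ++t) qj *= q;
        num *= qk - 1;
        den *= qj - 1;
    }
    return (uint64_t)(num / den);   // the quotient is always an integer
}
// e.g. gaussian_binomial(4, 2, 2) == 35 subspaces of dimension 2 in F_2^4
```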
At last, when calling the recursion on a sub-matrix we can pull back decompositions we have already tried to the corresponding sub-space to avoid double counting again.
Theorem 18.
Let $M$ be a minimal presentation of a persistence module over $\mathbb{F}_q$ with $n$ generators and relations and at most $k$ relations of the same degree. Then the exhaustive decomposition algorithm has a worst-case running time in $q^{O(k^2)} \cdot \mathrm{poly}(n)$; in particular, it is fixed-parameter tractable in $k$.
Proof.
Deferred to the extended version [26].
6 Automorphism-invariant $\mathcal{B}$-decomposition
While exhaustive $\mathcal{B}$-decomposition will comfortably work only for small $k$, we will encounter presentations where this approach is unsustainable. We will see in the next example that the iteration over subspaces as in Section 5 can often be avoided.
Example 19.
Consider the $\mathcal{B}$-decomposed presentation extending Example 7. We calculate the $\mathrm{Hom}_z$-spaces between the blocks; three of them are non-trivial and all other $\mathrm{Hom}_z$-sets are trivial. We depict this information in a digraph structure on $\mathcal{B}$.
There are no maps into the last block, so if we want to change it we can only use admissible column operations. If we consider the last row as a presentation itself, it is not decomposed maximally: the two 1s form a block, which can be made smaller by adding the 4th to the 3rd column. That is, we are minimising this sub-presentation, reaching the following presentation on the left, and the last column index must become part of this block.
Next we turn to the second block. Its subbatch has a part sitting above the cleared part of the last row and a part in the new column. Now if we were again to add the 4th column to the 3rd, then we could not delete the 1 that would appear with any row operations, because there are no maps into the last block! So the non-existence of maps into a block restricts the column operations away from the block.
This implies that we cannot use any operations from outside of this block to decompose it, and since we cannot reduce further, the 3rd column must become part of the block. Then the rest can be zeroed out using BlockReduce (1b) again and we reach the presentation above on the right. Next we try to reduce the remaining subbatch and start with the part in the new column, because the column operation from the fourth to the third column is still impossible. We cannot zero it out with admissible operations, so the two affected blocks need to merge:
We update the $\mathrm{Hom}_z$-spaces for the new blocks and can continue with the next batch.
Extensions concentrated at $z$.
We have seen in Example 19 that if the digraph on $\mathcal{B}$ is acyclic, it is possible to process the blocks in an order such that all admissible operations we need to consider only flow in the direction of the graph. Consequently, no iteration over subspaces is needed. In particular, the algorithm would then also work over any field $K$, not only a finite one. Let us formalise the structure of the matrix we need to decompose:
Definition 20.
A $\mathcal{B}$-decomposed presentation of the form
$$M = \begin{pmatrix} A & X \\ 0 & C \end{pmatrix}$$
is called a $\mathcal{B}$-decomposed extension if $A$ and $C$ are minimal indecomposable decompositions.
We denote by $k'$ the number of columns of $X$.
When the digraph is acyclic, we have seen in Example 19 that $A$ and $C$ each contain just a single block and there are no ($z$-)maps from $C$ to $A$. We argued that in this case there are also no column operations from $C$ to the left which need to be considered, and so we can use BlockReduce (1b) (on the extended presentation) to decide if $X$ can be zeroed out.
Remark 21.
Let $A$ and $C$ also denote the persistence modules presented by the two submatrices $A$ and $C$ in Definition 20, and let $E$ be the module presented by the entire matrix $M$. The inclusion of $A$ into $E$ induces a short exact sequence $0 \to A \to E \to C \to 0$. To this sequence corresponds a cocycle in $\mathrm{Ext}^1(C, A)$, which is exactly given by $X$.
The correctness of the algorithm follows from our main theorem by lifting isomorphisms:
Theorem 22.
Let $0 \to A \to E \to C \to 0$ be induced by a $\mathcal{B}$-decomposed extension. If $\mathrm{Hom}_z(C, A) = 0$ and $E$ is decomposable, then so is the entire short exact sequence.
Proof.
[26, Theorem 7.25].
Notice that the condition $\mathrm{Hom}_z(C, A) = 0$ means that $X$ is almost invariant under the action of the automorphism groups of $A$ and $C$: no automorphism could alter $X$, and therefore also not the cocycle from Remark 21.
What if the digraph is not acyclic?
In general there is no order in which we can traverse the blocks such that maps to the current block come only from parts of the matrix which already form an indecomposable decomposition. The solution is to contract all strongly connected components of the digraph, forming its condensation.
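Computing the condensation is a standard strongly-connected-components pass over the Hom-digraph; a Kosaraju-style sketch, where adj[u] is assumed to list the blocks v with a non-trivial Hom-space from u to v:

```cpp
#include <functional>
#include <vector>

// Returns comp[u] = index of u's strongly connected component; the indices
// appear in a topological order of the (acyclic) condensation, which is the
// order in which the contracted blocks can then be processed.
std::vector<int> condensation(const std::vector<std::vector<int>>& adj) {
    int n = (int)adj.size();
    std::vector<std::vector<int>> radj(n);
    for (int u = 0; u < n; ++u) for (int v : adj[u]) radj[v].push_back(u);
    std::vector<int> order, comp(n, -1);
    std::vector<char> seen(n, 0);
    std::function<void(int)> dfs1 = [&](int u) {
        seen[u] = 1;
        for (int v : adj[u]) if (!seen[v]) dfs1(v);
        order.push_back(u);
    };
    for (int u = 0; u < n; ++u) if (!seen[u]) dfs1(u);
    int c = 0;
    std::function<void(int)> dfs2 = [&](int u) {
        comp[u] = c;
        for (int v : radj[u]) if (comp[v] < 0) dfs2(v);
    };
    for (int i = n - 1; i >= 0; --i)
        if (comp[order[i]] < 0) { dfs2(order[i]); ++c; }
    return comp;
}
```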
The $\mathcal{B}$-decomposed extension we need to decompose in each step may then have many blocks in $A$ and $C$ and thus cannot be reduced via a single linear system anymore. We employ another algorithm, ExtensionDecomp ([26, Algorithm B.4]), which again loops over subspaces, but of the column space of $X$. The resulting algorithm is FPT in the largest occurring value of $k'$, and is called the Automorphism-invariant Iterative Decomposition Algorithm.
Interval-Decomposables.
Assume that the input module is interval-decomposable. Then, as we have seen in Section 4, the relation $\preceq$ is transitive and each sub-column can be assumed to consist of a single entry. Also, for every $\mathcal{B}$-decomposed extension we need to reduce, the matrices $A$ and $C$ must each present a set of isomorphic intervals. Then all row and column operations on $X$ are possible and we can use normal matrix reduction ([26, Algorithm B.1]).
Theorem 23.
The method outlined above decomposes interval-decomposable modules in time polynomial in the number $n$ of generators and relations, independent of $k$.
7 Experiments
We have implemented our decomposition algorithm in a stand-alone C++ library aida. aida takes as input a minimal presentation in the form of an scc2020 file (format described at https://bitbucket.org/mkerber/chain_complex_format/src/master/), which can be generated, for instance, using the mpfree library [42]. All experiments were performed on a workstation with an Intel(R) Xeon(R) E5-1650 v3 CPU (6 cores, 12 threads, 3.5GHz) and 64 GB RAM, running GNU/Linux (Ubuntu 20.04.2) and gcc 9.4.0.
Test instances.
We considered a large collection of data files [41] including chain complex data generated by various bifiltrations, including function-Rips, lower-star bifiltrations, and multi-covers; see [33]. Also included are chain complexes of function-alpha bifiltrations for various choices of point clouds and density functions, as described in the paper of Alonso et al. [4]. We additionally generated test cases which are not based on bifiltration data, by simple random processes that create interval-decomposable presentations or random presentations inspired by the Erdős–Rényi model of random graphs ([26, Appendix A]). In total, the benchmark set contains around 1300 different minimal presentation files. Our benchmark scripts, the version of aida that we have used, and the results of our experiments are available at [40].
Most instances in the above collection are distinctly graded, that is, $k = 1$. This is perhaps surprising because, for instance, function-alpha bifiltrations are generally not distinctly graded simplex-wise. Yet apparently their homology classes have unique degrees. A few instances have slightly larger $k$, but the only instances with large values of $k$ are the ones coming from multi-cover bifiltrations as described in [25].
Effectiveness of the speed-ups.
We implemented our code such that individual optimizations can be disabled to gauge their importance. Our first finding is that the column sweep optimization, described at the beginning of Section 4, has the most visible effect on practical performance. In Table 1, we display the running time for some representative instances. We observe that the improvement factor depends on the instance type and even varies a lot between instances of the same type (as the third and fourth rows of the table indicate). However, the improvement was consistent over all considered examples.
type | #Gens | #Rels | vanilla (s) | +sweep (s) | +homset (s)
---|---|---|---|---|---
fun-alpha (kde) | 10000 | 14014 | 118 | 0.39 | 0.42
fun-alpha (random) | 6460 | 11154 | 7.64 | 0.37 | 0.33
random | 800 | 782 | 6.85 | 2.12 | 3.31
random | 800 | 776 | 54.0 | 2.54 | 3.03
off (hand) | 595 | 596 | 508 | 0.96 | 0.02
off (eros) | 2819 | 2820 | >3600 | 22.7 | 0.40
off (dragon) | 8229 | 8230 | >3600 | 833 | 9.4
off (raptor) | 7203 | 7186 | >3600 | 1588 | 32.7
We next consider the effect of the optimized computation of the system (1b). In many instances, this technique does not speed up the computation or even slows it down slightly, due to the technical overhead of solving (1a) first. However, the slow-down was never more than a small constant factor in the experiments. On the other hand, for some instances, a speed-up is achieved; most significantly, this is the case for the off-datasets, which are triangular meshes in 3D, bi-filtered along two of the coordinate axes (bottom half of Table 1). This happens because this filtration tends to create the large but pointwise low-dimensional summands for which we have optimized the computation.
Empirical complexity.
We demonstrate in Table 2 that aida is able to decompose presentations with hundreds of thousands of generators and relations within seconds in many practical cases. This is also true for non-distinctly graded cases: the bottom half of the table displays multi-cover instances where $k$ is larger than $1$. We also display the progression of aida's time and memory consumption when the input size (roughly) doubles. We can see that in some cases aida scales almost linearly with the size of the input presentation (the upper-left and lower-right parts of the table), whereas in other cases the time grows rather quadratically (the upper-right and lower-left parts). We also observed instances with a super-quadratic growth in our experiments; nevertheless, the empirical behavior is much better than what the asymptotic bound predicts. This is not surprising for realistic sparse input matrices, in analogy to the persistence algorithm for one parameter, because matrix reduction is generally faster on them. Also, the summands typically stay small, so that (1) can be solved fast.
dim | #Gens | #Rels | time (s) | mem | dim | #Gens | #Rels | time (s) | mem
---|---|---|---|---|---|---|---|---|---
0 | 5000 | 7431 | 0.11 | 17 MB | 1 | 12410 | 19890 | 22.3 | 108 MB
 | 10000 | 14701 | 0.22 | 29 MB | | 25681 | 40288 | 80.3 | 277 MB
 | 20000 | 29030 | 0.47 | 53 MB | | 53250 | 81093 | 562 | 928 MB
 | 40000 | 57691 | 1.23 | 101 MB | | 112229 | 165769 | 1952 | 1.55 GB
2 | 15707 | 31414 | 0.47 | 50 MB | 1 | 6856 | 13711 | 0.12 | 23 MB
 | 32251 | 64503 | 1.25 | 97 MB | | 14570 | 29140 | 0.27 | 46 MB
 | 71166 | 142332 | 6.36 | 209 MB | | 33303 | 66606 | 0.76 | 99 MB
 | 154400 | 308800 | 17.6 | 447 MB | | 74462 | 148924 | 1.95 | 218 MB
Comparison with other software.
We are not aware of any other implementation for the special case of multi-graded persistence modules, but there exist some libraries that include a decomposition algorithm for modules over finite dimensional algebras, for example the algorithm by Lux and Szőke [50] in the C-MeatAxe library. Since using this algorithm would require too many preprocessing steps, we instead used the decomposition algorithm for quiver representations in the QPA package [54] for GAP [34] and implemented (also in the aida package) an algorithm to convert MPM to quiver representations. Our tests show that the QPA algorithm already needs several minutes for small presentations and times out for all presentations we have considered for testing aida.
Speeding up barcode template computation.
We demonstrate how decomposing a module can speed up computational tasks in multi-parameter persistence: barcode templates are a common tool to capture all combinatorial barcodes that arise from a 2-parameter module through restrictions to one parameter. Such a barcode template is computed via an arrangement of lines in the plane, arising from the joins of the degrees of generators and relations of a module via point-line duality. It is evident that this construction is compatible with decomposition, meaning that if a module decomposes as $M = M_1 \oplus M_2$, the arrangements of $M_1$ and $M_2$ can be computed separately and then be overlaid.
We implemented a small test program that computes the barcode template using the arrangement package of Cgal [59], once for the undecomposed modules and once after applying aida. Table 3 shows that the arrangement size as well as the computation time reduce drastically when using aida. We remark that the variance in size of the aida arrangements was very large, leading to the effect that the average arrangement size can even decrease with growing input in our sample. In either case, we speculate that incorporating aida would speed up the visualization of persistence modules in Rivet and is a necessary preprocessing step for the exact computation of matching distances in practice.
#points | #Gens | #Rels | #lines (with aida) | V (with aida) | time (s, with aida) | #lines (without) | V (without) | time (s, without)
---|---|---|---|---|---|---|---|---
20 | 18.4 | 28 | 24.6 | 210 | 0.002 | 176 | 6639 | 0.034
40 | 59 | 97 | 144 | 6295 | 0.026 | 1877 | 692K | 3.50
60 | 99 | 166 | 382 | 39996 | 0.19 | 5096 | 5.076M | 27.1
80 | 137 | 224 | 346 | 33301 | 0.16 | 8801 | 14.3M | 81.8
8 Conclusion
aida is the first practical algorithm to decompose multi-parameter persistence modules. We emphasise that it works for any number of parameters (although all experiments were on 2 parameters) and should be adaptable to any other poset or acyclic quiver.
Our improvements are a consequence of a better understanding of MPM on the algebraic level. More speed-ups are possible; for instance, the faster computation of Hom-spaces that we employ is not the best possible, and we plan to improve it further. We also believe that the algorithm from Section 6 should show improvements over the exhaustive algorithm of Section 5 for sparse input with larger $k$, and for non-sparse input also for smaller $k$.
References
- [1] Ángel Javier Alonso and Michael Kerber. Decomposition of zero-dimensional persistence modules via rooted subsets. In International Symposium on Computational Geometry (SoCG), 2023. doi:10.4230/LIPICS.SOCG.2023.7.
- [2] Ángel Javier Alonso, Michael Kerber, and Siddharth Pritam. Filtration-domination in bifiltered graphs. In Algorithm Engineering and Experiments (ALENEX), 2023. doi:10.1137/1.9781611977561.CH3.
- [3] Ángel Javier Alonso, Michael Kerber, and Primoz Skraba. Probabilistic analysis of multiparameter persistence decompositions into intervals. In International Symposium on Computational Geometry (SoCG), 2024. doi:10.4230/LIPIcs.SoCG.2024.6.
- [4] Ángel Javier Alonso, Michael Kerber, Tung Lam, and Michael Lesnick. Delaunay bifiltrations of functions on point clouds. In Symposium on Discrete Algorithms (SODA), 2024. doi:10.1137/1.9781611977912.173.
- [5] Hideto Asashiba, Mickaël Buchet, Emerson G. Escolar, Ken Nakashima, and Michio Yoshiwaki. On interval decomposability of 2d persistence modules. Computational Geometry, 105-106:101879, 2022. doi:10.1016/j.comgeo.2022.101879.
- [6] Gorô Azumaya. Corrections and supplementaries to my paper concerning Krull-Remak-Schmidt’s theorem. Nagoya Mathematical Journal, 1:117–124, 1950.
- [7] Ulrich Bauer, Michael Kerber, Jan Reininghaus, and Hubert Wagner. Phat–persistent homology algorithms toolbox. Journal of Symbolic Computation, 78:76–90, 2017. doi:10.1016/J.JSC.2016.03.008.
- [8] Ulrich Bauer, Fabian Lenzen, and Michael Lesnick. Efficient Two-Parameter Persistence Computation via Cohomology. In International Symposium on Computational Geometry (SoCG), 2023. doi:10.4230/LIPIcs.SoCG.2023.15.
- [9] Ulrich Bauer and Luis Scoccola. Generic two-parameter persistence modules are nearly indecomposable, 2022. arXiv:2211.15306.
- [10] Silvia Biasotti, Andrea Cerri, Patrizio Frosini, and Daniela Giorgi. A new algorithm for computing the 2-dimensional matching distance between size functions. Pattern Recognition Letters, 32(14):1735–1746, 2011. doi:10.1016/J.PATREC.2011.07.014.
- [11] Håvard Bakke Bjerkevik. Stabilizing decomposition of multiparameter persistence modules. Foundations of Computational Mathematics, January 2025. doi:10.1007/s10208-025-09695-w.
- [12] Håvard Bakke Bjerkevik and Michael Kerber. Asymptotic improvements on the exact matching distance for 2-parameter persistence. Journal of Compututational Geometry, 14(1):309–342, 2023. doi:10.20382/JOCG.V14I1A12.
- [13] Andrew J. Blumberg and Michael Lesnick. Stability of 2-parameter persistent homology. Foundations of Computational Mathematics, 24(2):385–427, 2024. doi:10.1007/S10208-022-09576-6.
- [14] Wieb Bosma, John Cannon, and Catherine Playoust. The Magma algebra system. I. The user language. Journal of Symbolic Computation, 24(3-4):235–265, 1997. doi:10.1006/jsco.1996.0125.
- [15] Magnus Bakke Botnan and Michael Lesnick. An introduction to multiparameter persistence, 2023. arXiv:2203.14289.
- [16] Magnus Bakke Botnan, Steffen Oppermann, and Steve Oudot. Signed Barcodes for Multi-Parameter Persistence via Rank Decompositions. In International Symposium on Computational Geometry (SoCG), 2022. doi:10.4230/LIPIcs.SoCG.2022.19.
- [17] Mickaël Buchet, Bianca B. Dornelas, and Michael Kerber. Sparse higher order Čech filtrations. In International Symposium on Computational Geometry (SoCG), 2023. doi:10.4230/LIPICS.SOCG.2023.20.
- [18] Mickaël Buchet, Bianca B. Dornelas, and Michael Kerber. Sparse higher order Čech filtrations. Journal of the ACM, 71(4):28:1–28:23, 2024. doi:10.1145/3666085.
- [19] Mickaël Buchet and Emerson G. Escolar. Realizations of indecomposable persistence modules of arbitrarily large dimension. In International Symposium on Computational Geometry (SoCG), 2018. doi:10.4230/LIPICS.SOCG.2018.15.
- [20] Mickaël Buchet and Emerson G. Escolar. Realizations of indecomposable persistence modules of arbitrarily large dimension. Journal of Computational Geometry, 13(1):298–326, 2022. doi:10.20382/jocg.v13i1a12.
- [21] Gunnar Carlsson and Afra Zomorodian. The theory of multidimensional persistence. Discrete and Computational Geometry, 42:71–93, June 2007. doi:10.1007/s00454-009-9176-0.
- [22] Gunnar E. Carlsson and Afra Zomorodian. The theory of multidimensional persistence. In The ACM Symposium on Computational Geometry (SoCG), 2007. doi:10.1145/1247069.1247105.
- [23] Alexander Chistov, Gábor Ivanyos, and Marek Karpinski. Polynomial time algorithms for modules over finite dimensional algebras. In International Symposium on Symbolic and Algebraic Computation (ISSAC), 1997. doi:10.1145/258726.258751.
- [24] René Corbet, Ulderico Fugacci, Michael Kerber, Claudia Landi, and Bei Wang. A kernel for multi-parameter persistent homology. Computers & Graphics: X, 2, 2019. doi:10.1016/j.cagx.2019.100005.
- [25] René Corbet, Michael Kerber, Michael Lesnick, and Georg Osang. Computing the multicover bifiltration. In International Symposium on Computational Geometry (SoCG), 2021. doi:10.4230/LIPICS.SOCG.2021.27.
- [26] Tamal K. Dey, Jan Jendrysiak, and Michael Kerber. Decomposing multiparameter persistence modules. Arxiv report 2504.08119, 2025. doi:10.48550/arXiv.2504.08119.
- [27] Tamal K. Dey, Woojin Kim, and Facundo Mémoli. Computing generalized rank invariant for 2-parameter persistence modules via zigzag persistence and its applications. In International Symposium on Computational Geometry (SoCG), 2022. doi:10.4230/LIPICS.SOCG.2022.34.
- [28] Tamal K. Dey and Cheng Xin. Computing Bottleneck Distance for 2-D Interval Decomposable Modules. In International Symposium on Computational Geometry (SoCG), 2018. doi:10.4230/LIPIcs.SoCG.2018.32.
- [29] Tamal K. Dey and Cheng Xin. Generalized persistence algorithm for decomposing multiparameter persistence modules. Journal of Applied and Computational Topology, 6:271–322, 2022. doi:10.1007/s41468-022-00087-5.
- [30] Herbert Edelsbrunner and John Harer. Computational Topology: an Introduction. American Mathematical Society, 2010.
- [31] Burçin Eröcal, Oleksandr Motsak, Frank-Olaf Schreyer, and Andreas Steenpaß. Refined algorithms to compute syzygies. Journal of Symbolic Computation, 74:308–327, 2016. doi:10.1016/j.jsc.2015.07.004.
- [32] Marc Fersztand, Emile Jacquard, Vidit Nanda, and Ulrike Tillmann. Harder–Narasimhan filtrations of persistence modules. Transactions of the London Mathematical Society, 11(1), 2024. doi:10.1112/tlm3.70003.
- [33] Ulderico Fugacci, Michael Kerber, and Alexander Rolle. Compression for 2-parameter persistent homology. Computational Geometry, 109, 2023. doi:10.1016/j.comgeo.2022.101940.
- [34] The GAP Group. GAP – Groups, Algorithms, and Programming, Version 4.13.1, 2024. URL: https://www.gap-system.org.
- [35] David Hilbert. Über die Theorie der algebraischen Formen. Mathematische Annalen, 36:473–534, 1890. URL: http://eudml.org/doc/157506.
- [36] Derek F. Holt. The meataxe as a tool in computational group theory. In The Atlas of Finite Groups - Ten Years On, pages 74–81. Cambridge University Press, 1998. doi:10.1017/CBO9780511565830.011.
- [37] Derek F. Holt and Sarah Rees. Testing modules for irreducibility. Journal of the Australian Mathematical Society. Series A. Pure Mathematics and Statistics, 57(1):1–16, 1994. doi:10.1017/S1446788700036016.
- [38] Jan Jendrysiak. AIDA. Software, version 0.2., FWF P 33765-N, swhId: swh:1:dir:c339cb432335e83758b23a807641b9c6a43525c8 (visited on 2025-05-30). URL: https://github.com/JanJend/AIDA, doi:10.4230/artifacts.23282.
- [39] Jan Jendrysiak. Persistence Algebra. Software, version 0.1., FWF P 33765-N, swhId: swh:1:dir:e41d23c6d743921e03d8a54e45ea3a4d36f7718b (visited on 2025-05-30). URL: https://github.com/JanJend/Persistence-Algebra, doi:10.4230/artifacts.23283.
- [40] Jan Jendrysiak and Michael Kerber. Aida benchmarks SoCG 2025 [data set]. Graz University of Technology, 2025. doi:10.3217/ag750-fq560.
- [41] Michael Kerber and Tung Lam. Benchmark data sets of minimal presentations of 2-parameter persistence modules [data set]. Graz University of Technology, 2025. doi:10.3217/rxedk-qyq77.
- [42] Michael Kerber and Alexander Rolle. Fast minimal presentations of bi-graded persistence modules. In Algorithm Engineering and Experiments (ALENEX), 2021. doi:10.1137/1.9781611976472.16.
- [43] Michael Kerber and Florian Russold. Representing two-parameter persistence modules via graphcodes, 2025. arXiv:2503.07368.
- [44] Michael Kerber and Matthias Söls. The localized union-of-balls bifiltration. In International Symposium on Computational Geometry (SoCG), 2023. doi:10.4230/LIPICS.SOCG.2023.45.
- [45] Mark Kleiner. Partially ordered sets of finite type. Journal of Soviet Mathematics, 3(5):607–615, May 1975. doi:10.1007/BF01084663.
- [46] Michael Lesnick and Matthew Wright. Interactive visualization of 2-d persistence modules, 2015. arXiv:1512.00180.
- [47] Michael Lesnick and Matthew Wright. Computing minimal presentations and bigraded Betti numbers of 2-parameter persistent homology. SIAM Journal on Applied Algebra and Geometry, 6(2):267–298, 2022. doi:10.1137/20M1388425.
- [48] David Loiseaux, Luis Scoccola, Mathieu Carrière, Magnus Bakke Botnan, and Steve Oudot. Stable vectorization of multiparameter persistent homology using signed barcodes as measures. In Advances in Neural Information Processing Systems (NeurIPS), 2023. URL: http://papers.nips.cc/paper_files/paper/2023/hash/d75c474bc01735929a1fab5d0de3b189-Abstract-Conference.html.
- [49] Michèle Loupias. Indecomposable representations of finite ordered sets. In Representations of Algebras, pages 201–209. Springer, 1975.
- [50] Klaus M. Lux and Magdolna Szőke. Computing Decompositions of Modules over Finite-Dimensional Algebras. Experimental Mathematics, 16(1):1–6, 2007. doi:10.1080/10586458.2007.10128988.
- [51] Alexander McCleary and Amit Patel. Edit distance and persistence diagrams over lattices. SIAM Journal on Applied Algebra and Geometry, 6(2):134–155, 2022. doi:10.1137/20M1373700.
- [52] Dmitriy Morozov and Amit Patel. Output-sensitive computation of generalized persistence diagrams for 2-filtrations, 2023. arXiv:2112.03980.
- [53] Richard A. Parker. The computer calculation of modular characters (the meat-axe). In Computational Group Theory, pages 267–274. Academic Press, 1984.
- [54] The QPA-team. QPA - Quivers, path algebras and representations - a GAP package, Version 1.33, 2022. URL: https://folk.ntnu.no/oyvinso/QPA/.
- [55] Lajos Rónyai. Computing the structure of finite algebras. Journal of Symbolic Computation, 9:355–373, 1990. doi:10.1016/S0747-7171(08)80017-X.
- [56] Hermann Schubert. Anzahl-Bestimmungen für lineare Räume beliebiger Dimension. Acta Mathematica, 8:97–118, 1900. doi:10.1007/BF02417085.
- [57] Magdolna Szőke. Examining Green Correspondents of Weight Modules. PhD thesis, RWTH Aachen, 1998.
- [58] Oliver Vipond. Multiparameter persistence landscapes. Journal of Machine Learning Research, 21:1–38, 2020. URL: http://jmlr.org/papers/v21/19-054.html.
- [59] Ron Wein, Eric Berberich, Efi Fogel, Dan Halperin, Michael Hemmer, Oren Salzman, and Baruch Zukerman. 2D arrangements. In CGAL User and Reference Manual. CGAL Editorial Board, 6.0.1 edition, 2024. URL: https://doc.cgal.org/6.0.1/Manual/packages.html#PkgArrangementOnSurface2.
- [60] Cheng Xin, Soham Mukherjee, Shreyas N. Samaga, and Tamal K. Dey. GRIL: A -parameter persistence based vectorization for machine learning. In Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML), pages 313–333, 2023. URL: https://proceedings.mlr.press/v221/xin23a.html.