Non-Boolean OMv: One More Reason to Believe Lower Bounds for Dynamic Problems
Abstract
Most of the known tight lower bounds for dynamic problems are based on the Online Boolean Matrix-Vector Multiplication (OMv) Hypothesis, which is not as well studied and understood as some more popular hypotheses in fine-grained complexity. It would be desirable to base hardness of dynamic problems on a more believable hypothesis. We propose analogues of the OMv Hypothesis for variants of matrix multiplication that are known to be harder than Boolean product in the offline setting, namely: equality, dominance, min-witness, min-max, and bounded monotone min-plus products. These hypotheses are a priori weaker assumptions than the standard (Boolean) OMv Hypothesis and yet we show that they are actually equivalent to it. This establishes the first such fine-grained equivalence class for dynamic problems.
Keywords and phrases:
Fine-grained complexity, OMv hypothesis, reductions, equivalence class
2012 ACM Subject Classification:
Theory of computation → Data structures design and analysis; Theory of computation → Complexity classes
Acknowledgements:
We thank anonymous reviewers for pointing us to the literature on the cell-probe model and for many useful hints on improving the manuscript.
Funding:
Part of this work was done when both authors were at the Max Planck Institute for Informatics.
1 Introduction
The job of a dynamic algorithm is to keep its output up to date whenever its input undergoes a local change, e.g., maintaining a shortest path in a graph while the graph undergoes vertex deletions. Ideally, each such update should take at most polylogarithmic time, but at the very least it should be faster than recomputing a solution from scratch. Despite great progress in the field, for many dynamic problems that goal is beyond the reach of current algorithmic techniques. Starting from the seminal paper by Pătraşcu [19], we often get to explain this hardness by fine-grained conditional lower bounds.
Most of the known tight lower bounds for dynamic problems are based on the OMv Hypothesis [11]. This hypothesis is not as widely studied and as well understood as some other hypotheses in fine-grained complexity, such as SETH, 3SUM Hypothesis, and APSP Hypothesis (see, e.g., [22]). It would be more desirable to base hardness of dynamic problems on these more popular (and hence also more believable) assumptions. Unfortunately, the existing lower bounds conditional on them are often not tight for dynamic problems. It seems likely that these hypotheses are not strong enough to explain the complexity of many dynamic problems. We may need to search for a different approach to the following glaring question:
Can we have tight lower bounds for dynamic problems
based on a hypothesis that is more believable than OMv?
Recall that the OMv Hypothesis is about the Boolean product; it asserts that computing the Boolean product of two $n \times n$ matrices requires cubic time if the second matrix is given column by column in an online fashion. In the static (i.e., non-online) setting, the Boolean product is arguably the easiest of the many studied variants of matrix multiplication. Indeed, it can be computed in time $O(n^{\omega})$, where $\omega < 2.372$ [2] is the (integer) matrix multiplication exponent. (Moreover, the fastest known "combinatorial" algorithm for the Boolean product [1] does not give a similar improvement for the integer product.)
In the static matrix product world, if the $O(n^{\omega})$ running time is on the "fast" end of the spectrum, then the min-plus product (related to distance computations in graphs) marks the other end: the fastest known algorithm shaves only a subpolynomial factor over the naive cubic running time [26], and the APSP Hypothesis from fine-grained complexity essentially says that no $O(n^{3-\varepsilon})$-time algorithm is possible, for any $\varepsilon > 0$ [23].
There are also numerous variants of matrix multiplication that seem to have an "intermediate" hardness on this spectrum. Examples include the min-max product [21, 9], min-witness product [8], equality product (a.k.a. Hamming product [18]), dominance product [17], threshold product [12], plus-max product [20], and many others [13]. The fastest known algorithms for these problems have running times that are functions of the matrix multiplication exponent $\omega$, and they converge to $O(n^{2.5})$ when $\omega = 2$. Although it is still an open problem whether this is necessarily the right complexity for all these problems, there are some partial results in the form of tight fine-grained reductions that suggest it might be the case [13, 15, 24].
1.1 Our contributions
The OMv Hypothesis (and the lower bounds it implies) would be a priori more believable if we could replace in its statement the Boolean product with some other product known to be harder in the static world. For instance, we can define in a similar way the Min-Max-OMv problem using the min-max product: Preprocess an $n \times n$ matrix $M$, and then answer (one by one, in an online fashion) $n$ queries, each of them asking, given a vector $v$, to compute the min-max product of $M$ and $v$, i.e., the vector $u$ such that
$u_i = \min_{j \in [n]} \max(M_{i,j}, v_j).$
Then we can state a corresponding hypothesis, let us call it the Min-Max-OMv Hypothesis, asserting that the Min-Max-OMv problem cannot be solved in truly subcubic time $O(n^{3-\varepsilon})$, for any $\varepsilon > 0$. This of course brings a question:
Can we still give tight reductions from Min-Max-OMv to those dynamic problems
for which there are known reductions from (Boolean-)OMv?
It turns out, yes, we can! Somewhat surprisingly (perhaps less so to those readers who are familiar with how the subcubic algebraic algorithms for the intermediate problems work; see Section 1.2), we can even give a tight reduction from Min-Max-OMv to Boolean-OMv. This shows that the Min-Max-OMv Hypothesis and the standard (Boolean-)OMv Hypothesis are actually equivalent. Moreover, the min-max product is not the only example of this phenomenon. We show more equivalent hypotheses based on several matrix products, which are harder than the Boolean product in the static setting (see Section 2 for the formal definitions).
Theorem 1.
The following problems either all have truly subcubic algorithms or none of them do:
- Boolean-OMv;
- Equality-OMv;
- Dominance-OMv;
- Min-Witness-OMv;
- Min-Max-OMv;
- Bounded Monotone Min-Plus-OMv.
For the Bounded Monotone Min-Plus-OMv problem the implied algorithm is randomized.
This conglomeration of equivalent problems can be interpreted as making the OMv Hypothesis more believable, and the conditional lower bounds based on it stronger. We recall two analogous conglomerations: the NP-complete problems and the problems equivalent to the All-Pairs Shortest Paths (APSP) problem under subcubic reductions. One of the reasons behind the great success of the theory of NP-completeness is its structural simplicity: many natural problems are NP-complete, and solving any of them efficiently would solve all of them efficiently, so they are all hard for the same underlying reason. For all the multi-faceted NP-complete problems, researchers from different areas have not managed to find a single efficient algorithm, so it seems very plausible that no such algorithm exists. Fine-grained complexity theory at large does not enjoy a similar simplicity: there are multiple independent hardness assumptions, and the reductions often go only one way, establishing hardness but not equivalence. A notable exception (another such exception is the class of vector problems equivalent to the Orthogonal Vectors problem [7]) is the APSP problem, which is conjectured to require cubic time and for which many other problems are known to be equivalent via subcubic reductions [23]. No truly subcubic algorithms have been found so far for any of these problems, which strengthens the APSP Hypothesis. Our Theorem 1 establishes another such class of problems equivalent under fine-grained subcubic reductions.
Subcubic algorithms in the cell-probe model.
We remark that the above equivalences also hold in the cell-probe model, where the running time is measured solely by the number of memory cell probes, and computation is free of charge. More specifically, it follows from our proofs that each of the above problems can be solved by solving a number of instances of any other of those problems and performing some additional computation that, in the word RAM model, runs in truly subcubic total time. Since any algorithm that runs sequentially in truly subcubic time in the word RAM model can access only a truly subcubic number of memory cells, the subcubic equivalence in the cell-probe model follows. Larsen and Williams [14] showed that the OMv Hypothesis is actually false in the cell-probe model: every query can be answered with $O(n^{7/4}/\sqrt{w})$ probes (amortized, in expectation), where $w$ is the word size. This means that all the other problems listed above have truly subcubic algorithms in the cell-probe model as well.
1.2 Technical overview
We prove Theorem 1 by a series of fine-grained reductions, depicted in Figure 1. The reductions are inspired by known subcubic algebraic algorithms for the corresponding (static) matrix product problems. Such algorithms have running times that are functions of the (integer) matrix multiplication exponent $\omega$. For example, the fastest known (static) min-max product algorithm [9] runs in time $\tilde{O}(n^{(3+\omega)/2})$, which is subcubic for any value of $\omega < 3$. In other words, this algorithm is an implicit subcubic reduction from the (static) min-max product to the (static, integer) matrix multiplication problem. The same is true about all the other intermediate problems we consider in this paper. Our technical contribution is adapting these implicit reductions so that (1) they work in the online setting, and (2) they require only the Boolean (and not integer) product.
Detour into static combinatorial algorithms.
Before we say a few words about each of the reductions in the paper, let us take a detour from online problems so that we can explicitly state the observation that we already hinted at in the previous paragraph.
Observation 2.
The static variants of all the matrix products listed in Theorem 1 are equivalent to each other under subcubic reductions, and the reductions are “combinatorial”. That is, if any of the static problems admits a subcubic “combinatorial” algorithm, then all of them do.
Why is this the case? In one direction, all those non-Boolean products can encode the Boolean product as a special case. In the other direction, all those non-Boolean products admit strongly subcubic algorithms with running times of the form $\tilde{O}(n^{f(\omega)})$ for a function $f$ that happens to satisfy $f(c) < 3$ for every $c < 3$ (e.g., $f(\omega) = (3+\omega)/2$ for the min-max product). Therefore, if we could replace in such an algorithm every use of fast algebraic (integer) matrix multiplication with a hypothesized $O(n^{3-\varepsilon})$-time "combinatorial" Boolean matrix multiplication algorithm, we would get a "combinatorial" algorithm for the corresponding non-Boolean product with the running time exponent $f(3-\varepsilon) < 3$. Such a replacement is possible whenever the non-Boolean algorithm does not use (or can be modified not to use) the full counting power of integer matrix multiplication but only checks which output entries are nonzero. This is explicitly the case for some of the non-Boolean products we consider (e.g., the min-witness product [8]). For some others, e.g., the bounded monotone min-plus product [25, 10], such a modification is nontrivial, but still possible.
While this observation is simple and in hindsight not really surprising, we are not aware of it being folklore in the field of fine-grained complexity, so we state it for completeness.
Subcubic reductions and where to find them.
In Section 3 we show a subcubic reduction from Equality-OMv to Boolean-OMv (Theorem 10), which can be seen as an adaptation to the online setting of the sparse matrix multiplication algorithm of Yuster and Zwick [27]. Specifically, our algorithm for Equality-OMv handles the most frequently appearing values with the help of Boolean-OMv queries (about indicator vectors of those values). The remaining less frequent values can then be handled with a brute-force approach within the target running time bound.
In Section 4 we show how a subcubic algorithm for Dominance-OMv would yield a subcubic algorithm for Min-Max-OMv (Theorem 11). The proof is inspired by the known (static) min-max product algorithms [21, 9], but it is at the same time simpler, because we do not have to optimize the dependence on $\omega$ in the running time of the resulting algorithm. In short, the algorithm splits each row of the input matrix into buckets based on the values of the entries, and makes many Dominance-OMv queries in order to establish for each output entry to which bucket of the corresponding row it belongs; then, the algorithm exhaustively searches through the entire bucket.
In Section 5 we show a reduction from Bounded Monotone Min-Plus-OMv to Equality-OMv. On a high level it follows some of the previous (static) algorithms for the bounded monotone min-plus product [25, 10]. However, it also gives a fresh perspective on the problem, because those previously known algorithms use a generalization of the min-witness and bounded (non-monotone) min-plus products (see [25, Theorem 1.2]), while ours deviates from this approach by using the equality product.
1.3 Related work
Bringmann et al. [4] take a different approach to strengthening the OMv Hypothesis. They propose a hypothesis about the complexity of determining if a (nondeterministic) finite automaton accepts a word, and show that this hypothesis implies the OMv Hypothesis. While their new hypothesis is not as well supported as the three main fine-grained complexity hypotheses, it is remarkable that it is a statement about a static problem implying a tight lower bound for an online problem.
In a very recent work, Liu [16] shows that OMv is equivalent to the online problem of maintaining an approximately minimum vertex cover in a fully dynamic bipartite graph.
To the best of our knowledge, the only other work that considers a variant of OMv for a non-Boolean product is by Chen et al. [6]. They use an assumption that Min-Plus-OMv requires cubic time in order to separate partially retroactive from fully retroactive data structures. We note that this assumption seems too strong to be equivalent to the OMv Hypothesis. In particular, any "too simple" reduction from Min-Plus-OMv to Boolean-OMv would morally translate to a subcubic algorithm for the (static) Min-Plus Product problem, refuting the APSP Hypothesis.
1.4 Open problems
In this paper we manage to reduce to Boolean-OMv from OMv variants that do not involve counting. We leave it open whether a subcubic algorithm for Boolean-OMv would imply subcubic OMv algorithms for, e.g., the counting variants of the equality and dominance products (i.e., $u_i = |\{ j : M_{i,j} = v_j \}|$ and $u_i = |\{ j : M_{i,j} \le v_j \}|$, respectively), or at least for the standard integer product ($u_i = \sum_{j} M_{i,j} \cdot v_j$).
These open problems relate to the general quest for fine-grained counting-to-decision reductions. Chan, Vassilevska Williams, and Xu [5] gave such reductions for the Min-Plus Product, Exact Triangle, and 3SUM problems. Somewhat ironically, their reductions crucially rely on a fast algebraic algorithm for (static) integer matrix multiplication, so it seems unlikely that their techniques could be used to resolve the above open problems, which are about online problems.
2 Preliminaries
2.1 Notation
We use $[n]$ to denote the set $\{1, 2, \ldots, n\}$.
2.2 Problems
In this section we formally define all the problems that appear in Theorem 1. Since the definitions are similar to each other, we underline the differences between them.
Definition 3 (Boolean-OMv).
We are first given for preprocessing a Boolean matrix $M \in \{0,1\}^{n \times n}$, and then we need to answer $n$ queries: In the $t$-th query, we are given a column vector $v^{(t)} \in \{0,1\}^n$, and we have to compute the Boolean product $M v^{(t)}$, defined by
$(M v^{(t)})_i = \bigvee_{j \in [n]} \left( M_{i,j} \wedge v^{(t)}_j \right).$
We need to answer queries one by one in an online fashion, i.e., we have to output $M v^{(t)}$ before we can receive $v^{(t+1)}$.
Definition 4 (Equality-OMv).
We are first given for preprocessing an integer matrix $M \in \mathbb{Z}^{n \times n}$, and then we need to answer $n$ queries: In the $t$-th query, we are given a column vector $v^{(t)} \in \mathbb{Z}^n$, and we have to compute the equality product $u^{(t)} \in \{0,1\}^n$, defined by
$u^{(t)}_i = 1 \text{ if and only if there exists } j \in [n] \text{ such that } M_{i,j} = v^{(t)}_j.$
We need to answer queries one by one in an online fashion, i.e., we have to output $u^{(t)}$ before we can receive $v^{(t+1)}$.
Definition 5 (Dominance-OMv).
We are first given for preprocessing an integer matrix $M \in \mathbb{Z}^{n \times n}$, and then we need to answer $n$ queries: In the $t$-th query, we are given a column vector $v^{(t)} \in \mathbb{Z}^n$, and we have to compute the dominance product $u^{(t)} \in \{0,1\}^n$, defined by
$u^{(t)}_i = 1 \text{ if and only if there exists } j \in [n] \text{ such that } M_{i,j} \le v^{(t)}_j.$
We need to answer queries one by one in an online fashion, i.e., we have to output $u^{(t)}$ before we can receive $v^{(t+1)}$.
Definition 6 (Min-Witness-OMv).
We are first given for preprocessing a Boolean matrix $M \in \{0,1\}^{n \times n}$, and then we need to answer $n$ queries: In the $t$-th query, we are given a column vector $v^{(t)} \in \{0,1\}^n$, and we have to compute the min-witness product $u^{(t)}$, defined by
$u^{(t)}_i = \min \{ j \in [n] : M_{i,j} = v^{(t)}_j = 1 \},$
with the convention that $\min \emptyset = \infty$. We need to answer queries one by one in an online fashion, i.e., we have to output $u^{(t)}$ before we can receive $v^{(t+1)}$.
Definition 7 (Min-Max-OMv).
We are first given for preprocessing an integer matrix $M \in \mathbb{Z}^{n \times n}$, and then we need to answer $n$ queries: In the $t$-th query, we are given a column vector $v^{(t)} \in \mathbb{Z}^n$, and we have to compute the min-max product $u^{(t)}$, defined by
$u^{(t)}_i = \min_{j \in [n]} \max \left( M_{i,j}, v^{(t)}_j \right).$
We need to answer queries one by one in an online fashion, i.e., we have to output $u^{(t)}$ before we can receive $v^{(t+1)}$.
Definition 8 (Bounded Monotone Min-Plus-OMv).
We are first given for preprocessing an integer matrix $M$, and then we need to answer $n$ queries: In the $t$-th query, we are given a column vector $v^{(t)}$, and we have to compute the min-plus product $u^{(t)}$, defined by
$u^{(t)}_i = \min_{j \in [n]} \left( M_{i,j} + v^{(t)}_j \right).$
We need to answer queries one by one in an online fashion, i.e., we have to output $u^{(t)}$ before we can receive $v^{(t+1)}$. All entries (of $M$ and of the query vectors) are non-negative integers bounded by $O(n)$, and we are guaranteed that at least one of the following conditions holds:
- each row of $M$ is nondecreasing, i.e., $M_{i,j} \le M_{i,j+1}$;
- each column of $M$ is nondecreasing, i.e., $M_{i,j} \le M_{i+1,j}$;
- each $v^{(t)}$ is nondecreasing, i.e., $v^{(t)}_j \le v^{(t)}_{j+1}$;
- for every $j$, $v^{(t)}_j$ is a nondecreasing function of $t$, i.e., $v^{(t)}_j \le v^{(t+1)}_j$;
or it holds with "nondecreasing" replaced by "nonincreasing".
2.3 Hypotheses
Each of the problems defined above admits a naive cubic time algorithm, and for each of them we can conjecture that it is optimal up to subpolynomial factors.
Definition 9 (*-OMv Hypotheses).
For $\star \in \{$Boolean, Equality, Dominance, Min-Witness, Min-Max, Bounded Monotone Min-Plus$\}$, the $\star$-OMv Hypothesis asserts that the $\star$-OMv problem cannot be solved in time $O(n^{3-\varepsilon})$, for any constant $\varepsilon > 0$.
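To make these definitions concrete, the following Python reference implementations (our own naming, not from the paper) compute a single query of each product naively in $O(n^2)$ time; answering all $n$ queries this way takes the cubic time that the hypotheses assert is essentially optimal.

```python
import math

# Naive O(n^2)-per-query reference implementations of the products from
# Definitions 3-8. M is an n x n matrix (list of rows), v a length-n vector.

def boolean_mv(M, v):
    return [int(any(M[i][j] and v[j] for j in range(len(v))))
            for i in range(len(M))]

def equality_mv(M, v):
    return [int(any(M[i][j] == v[j] for j in range(len(v))))
            for i in range(len(M))]

def dominance_mv(M, v):
    return [int(any(M[i][j] <= v[j] for j in range(len(v))))
            for i in range(len(M))]

def min_witness_mv(M, v):
    # Smallest witness index j (1-indexed), or infinity if there is none.
    return [next((j + 1 for j in range(len(v)) if M[i][j] and v[j]), math.inf)
            for i in range(len(M))]

def min_max_mv(M, v):
    return [min(max(M[i][j], v[j]) for j in range(len(v)))
            for i in range(len(M))]

def min_plus_mv(M, v):
    return [min(M[i][j] + v[j] for j in range(len(v)))
            for i in range(len(M))]
```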
3 Reduction from Equality-OMv to Boolean-OMv
In this section we give a subcubic reduction that can be seen as an easy adaptation to the online setting of the sparse matrix multiplication algorithm of Yuster and Zwick [27].
Theorem 10.
If Boolean-OMv can be solved in time $O(n^{3-\varepsilon})$, for some $\varepsilon > 0$, then Equality-OMv can be solved in time $O(n^{3-\varepsilon/2})$.
Proof.
Recall that $M$ denotes the $n \times n$ integer matrix given for preprocessing in the Equality-OMv problem. Let $k$ be a parameter. For every $\ell \in [k]$ and every $j \in [n]$, let $m_{j,\ell}$ be the $\ell$-th most frequent value appearing in the $j$-th column of matrix $M$ (if there are fewer than $\ell$ distinct values in the column, let $m_{j,\ell}$ be some other arbitrary integer). Note that any value $x \notin \{m_{j,1}, \ldots, m_{j,k}\}$ appears in the $j$-th column of $M$ at most $n/k$ times; we call such values rare. In the preprocessing phase, the algorithm prepares $k$ Boolean matrices $M^{(1)}, \ldots, M^{(k)}$ defined as follows:
$M^{(\ell)}_{i,j} = 1$ if $M_{i,j} = m_{j,\ell}$, and $M^{(\ell)}_{i,j} = 0$ otherwise.
Then, it instantiates the hypothesized Boolean-OMv algorithm for each of these $k$ matrices separately. Finally, for each column of $M$, the algorithm prepares a dictionary mapping each rare value in that column to a list of indices under which that value appears in the column. This ends the preprocessing phase.
Upon receiving a query $v$, the algorithm first initializes the output vector to all zeros. Then, for every $\ell \in [k]$, it creates the vector $v^{(\ell)}$ defined by
$v^{(\ell)}_j = 1$ if $v_j = m_{j,\ell}$, and $v^{(\ell)}_j = 0$ otherwise,
and computes the Boolean product $M^{(\ell)} v^{(\ell)}$, using the $\ell$-th instantiation of the hypothesized Boolean-OMv algorithm. Each such product then gets element-wise OR-ed into the output vector. Finally, for every $j \in [n]$, if $v_j$ is a rare value in the $j$-th column of matrix $M$, the algorithm goes through the list of all indices $i$ such that $M_{i,j} = v_j$ (recall that there are at most $n/k$ of them) and for each of them sets the corresponding $i$-th entry of the output vector to $1$.
It is easy to see that whenever the algorithm sets an output entry to $1$, it is because of some pair of entries $M_{i,j}$ and $v_j$ that have the same value. Conversely, if some pair of entries $M_{i,j}$ and $v_j$ have the same value, then either it is a frequent value (i.e., $v_j = m_{j,\ell}$ for some $\ell \in [k]$) and $M^{(\ell)} v^{(\ell)}$ contributes a $1$, or it is a rare value and it gets manually matched.
Let us analyze the running time of our Equality-OMv algorithm. There are $k$ instantiations of the hypothesized Boolean-OMv algorithm, which require $O(k \cdot n^{3-\varepsilon})$ time in total. Then, going through all rare values takes at most $n \cdot n/k$ time per query, and thus $O(n^3/k)$ time for all $n$ queries. This adds up to total time $O(k \cdot n^{3-\varepsilon} + n^3/k)$. By choosing $k = n^{\varepsilon/2}$ we get the claimed running time $O(n^{3-\varepsilon/2})$.
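The reduction can be summarized in a short Python sketch (our own; the Counter-based preprocessing and the naive boolean_mv stand-in are illustrative assumptions, with the real algorithm plugging in the hypothesized subcubic Boolean-OMv data structure).

```python
from collections import Counter, defaultdict

def boolean_mv(A, w):
    # Naive stand-in for the hypothesized subcubic Boolean-OMv oracle.
    return [int(any(A[i][j] and w[j] for j in range(len(w))))
            for i in range(len(A))]

class EqualityOMv:
    def __init__(self, M, k):
        # k = number of "frequent" values handled per column
        # (the proof above picks k = n^{eps/2} to balance the two costs).
        n = self.n = len(M)
        self.M = M
        # top[j] = the (at most) k most frequent values in column j of M
        self.top = [[val for val, _ in
                     Counter(M[i][j] for i in range(n)).most_common(k)]
                    for j in range(n)]
        # B[l][i][j] = 1 iff M[i][j] equals the l-th most frequent value of column j
        self.B = [[[int(l < len(self.top[j]) and M[i][j] == self.top[j][l])
                    for j in range(n)] for i in range(n)] for l in range(k)]
        # rare[j] maps each rare value of column j to the rows where it appears
        self.rare = [defaultdict(list) for _ in range(n)]
        for j in range(n):
            frequent = set(self.top[j])
            for i in range(n):
                if M[i][j] not in frequent:
                    self.rare[j][M[i][j]].append(i)

    def query(self, v):
        # out[i] = 1 iff there exists j with M[i][j] == v[j]
        n = self.n
        out = [0] * n
        for l, B in enumerate(self.B):
            w = [int(l < len(self.top[j]) and v[j] == self.top[j][l])
                 for j in range(n)]
            hit = boolean_mv(B, w)  # one Boolean-OMv query per frequent rank l
            out = [out[i] | hit[i] for i in range(n)]
        for j in range(n):  # rare values: at most n/k matching rows per value
            for i in self.rare[j].get(v[j], []):
                out[i] = 1
        return out
```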
4 Reduction from Min-Max-OMv to Dominance-OMv
In this section we show how a subcubic algorithm for Dominance-OMv would yield a subcubic algorithm for Min-Max-OMv. Our proof is inspired by the static min-max product algorithms [21, 9], but it is at the same time simpler, because we do not optimize the dependence on $\omega$ in the running time of the resulting algorithm.
Theorem 11.
If Dominance-OMv can be solved in time $O(n^{3-\varepsilon})$, for some $\varepsilon > 0$, then Min-Max-OMv can be solved in time $O(n^{3-\varepsilon/2})$.
Proof.
Let $g$ be a parameter. For every $i \in [n]$, let $r_i$ denote the sorted $i$-th row of the input matrix $M$. Consider partitioning each $r_i$ into $\lceil n/g \rceil$ buckets of consecutive elements, with at most $g$ elements per bucket. For every $b \in [\lceil n/g \rceil]$, let $M^{(b)}$ be the matrix defined as follows:
$M^{(b)}_{i,j} = -M_{i,j}$ if $M_{i,j}$ falls into the $b$-th bucket of $r_i$, and $M^{(b)}_{i,j} = +\infty$ otherwise.
Note that each row of $M^{(b)}$ contains at most $g$ finite entries. (If there are multiple entries with the same value, they may land in different buckets.)
In the preprocessing phase, the algorithm instantiates the hypothesised Dominance-OMv algorithm for each of the matrices $M^{(b)}$, and also for the matrix $M$ itself. (Formally, the Dominance-OMv algorithm may not accept infinite entries in the input, but we can replace each $+\infty$ with $W+1$, where $W$ denotes the largest absolute value of any entry in $M$, and each entry greater than $W$ in any query vector with $W$; symmetrically for $-\infty$.)
Upon receiving a query $v$, the algorithm proceeds to compute the product $u$, where $u_i = \min_{j \in [n]} \max(M_{i,j}, v_j)$, in two steps. First, for every $i$, it computes the minimum $M_{i,j}$ such that $M_{i,j} \ge v_j$, and stores the results in a column vector $x$. Second, for every $i$, it computes the minimum $v_j$ such that $v_j \ge M_{i,j}$, and stores the results in a column vector $y$. At the very end the algorithm computes $u_i = \min(x_i, y_i)$, for every $i \in [n]$.
In order to compute $x$, the algorithm first asks for the dominance products $M^{(b)} \cdot (-v)$, for all $b \in [\lceil n/g \rceil]$. Then, for each $i$, the algorithm finds the smallest $b$ such that $(M^{(b)} \cdot (-v))_i = 1$, which corresponds to finding the first bucket in $r_i$ containing an element greater than or equal to the corresponding element in $v$. (This is because the entries in $M^{(b)}$ and in the query vector are negated.) Hence, the algorithm can scan the elements in this bucket and pick the smallest one that is larger than or equal to the corresponding element in $v$; this element is then stored in $x_i$.
Let us analyze the cost of computing the $x$'s over the span of $n$ queries. The dominance products require $O((n/g) \cdot n^{3-\varepsilon}) = O(n^{4-\varepsilon}/g)$ time in total. On top of that, for each of the $n$ queries and for each of the $n$ output coordinates, the algorithm scans one bucket of size at most $g$, which takes $O(n^2 g)$ time in total. All together, the algorithm spends $O(n^{4-\varepsilon}/g + n^2 g)$ time on computing the $x$'s.
Next, it is almost symmetric to calculate $y$. The algorithm sorts the entries of $v$ into an ordered list, and partitions it into $\lceil n/g \rceil$ buckets, with at most $g$ elements per bucket. For each bucket $b$, the algorithm computes the dominance product $M \cdot w^{(b)}$, where $w^{(b)}$ is the column vector such that
$w^{(b)}_j = v_j$ if $v_j$ falls into the $b$-th bucket, and $w^{(b)}_j = -\infty$ otherwise.
Then, for each $i$, the algorithm looks for the smallest $b$ such that $(M \cdot w^{(b)})_i = 1$, and scans the elements in the $b$-th bucket looking for the smallest $v_j$ that is greater than or equal to the corresponding $M_{i,j}$. By the same argument as before, computing all the $y_i$'s takes time $O(n^{4-\varepsilon}/g + n^2 g)$. Choosing $g = n^{1-\varepsilon/2}$ yields the claimed total running time of $O(n^{3-\varepsilon/2})$.
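A minimal Python sketch of the whole reduction follows (our own; the naive dominance_mv plays the role of the hypothesized subcubic Dominance-OMv algorithm, and we keep $\pm\infty$ entries instead of applying the finite-entry replacement from the remark above).

```python
import math

def dominance_mv(A, w):
    # Naive stand-in for the hypothesized subcubic Dominance-OMv oracle:
    # out[i] = 1 iff there exists j with A[i][j] <= w[j].
    return [int(any(A[i][j] <= w[j] for j in range(len(w))))
            for i in range(len(A))]

def min_max_via_dominance(M, v, g):
    # u[i] = min_j max(M[i][j], v[j]), computed with dominance queries only;
    # g is the bucket size (the proof picks g = n^{1 - eps/2}).
    n = len(M)
    nb = math.ceil(n / g)
    # Preprocessing: bucket each row by value; B[b] holds the negated entries
    # of bucket b (and +inf elsewhere), mirroring the matrices M^(b).
    order = [sorted(range(n), key=lambda j, i=i: M[i][j]) for i in range(n)]
    B = []
    for b in range(nb):
        Bb = [[math.inf] * n for _ in range(n)]
        for i in range(n):
            for j in order[i][b * g:(b + 1) * g]:
                Bb[i][j] = -M[i][j]
        B.append(Bb)
    # Step 1: x[i] = min{ M[i][j] : M[i][j] >= v[j] } via queries with -v.
    x = [math.inf] * n
    neg_v = [-e for e in v]
    hits = [dominance_mv(B[b], neg_v) for b in range(nb)]
    for i in range(n):
        for b in range(nb):
            if hits[b][i]:  # first bucket of row i with some M[i][j] >= v[j]
                x[i] = min(M[i][j] for j in order[i][b * g:(b + 1) * g]
                           if M[i][j] >= v[j])
                break
    # Step 2: y[i] = min{ v[j] : v[j] >= M[i][j] } via bucketed query vectors.
    y = [math.inf] * n
    vorder = sorted(range(n), key=lambda j: v[j])
    for b in range(nb):
        w = [-math.inf] * n
        for j in vorder[b * g:(b + 1) * g]:
            w[j] = v[j]
        hit = dominance_mv(M, w)
        for i in range(n):
            if y[i] == math.inf and hit[i]:
                y[i] = min(v[j] for j in vorder[b * g:(b + 1) * g]
                           if v[j] >= M[i][j])
    return [min(x[i], y[i]) for i in range(n)]
```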
5 Reduction from Bounded Monotone Min-Plus-OMv to Equality-OMv
In this section we show a reduction from Bounded Monotone Min-Plus-OMv to Equality-OMv. We follow the high-level approach of some of the previous static algorithms for the bounded monotone min-plus product [25, 10]. However, we deviate from that approach by using the equality product where the known algorithms use a generalization of the min-witness and bounded (non-monotone) min-plus products (see [25, Theorem 1.2]).
Theorem 12.
If Equality-OMv can be solved in time $O(n^{3-\varepsilon})$, for some $\varepsilon > 0$, then Bounded Monotone Min-Plus-OMv can be solved by a randomized algorithm, correct with high probability, running in time $\tilde{O}(n^{3-\varepsilon/3})$.
Before we present the algorithm itself let us introduce some notation and prove some preliminary facts. Let $\Delta$ be a parameter. For a fixed query vector $v$, let
$\tilde{u}_i = \min_{j \in [n]} \left( \lfloor M_{i,j}/\Delta \rfloor + \lfloor v_j/\Delta \rfloor \right).$
Be mindful that it is not necessarily the case that $\tilde{u}_i = \lfloor u_i/\Delta \rfloor$. Finally, for every $i \in [n]$, let us define the set of candidates for $u_i$ to be
$C_i = \left\{ j \in [n] : \lfloor M_{i,j}/\Delta \rfloor + \lfloor v_j/\Delta \rfloor \le \tilde{u}_i + 1 \right\}.$
Lemma 13.
It suffices to check only the candidates in $C_i$ in order to compute $u_i$, i.e., $u_i = \min_{j \in C_i} (M_{i,j} + v_j)$.
Proof.
First, for any pair $(i,j)$, due to rounding down we have
$\Delta \cdot \left( \lfloor M_{i,j}/\Delta \rfloor + \lfloor v_j/\Delta \rfloor \right) \le M_{i,j} + v_j < \Delta \cdot \left( \lfloor M_{i,j}/\Delta \rfloor + \lfloor v_j/\Delta \rfloor + 2 \right).$
Now, suppose that $j^*$ is a witness for $u_i$, and $\tilde{j}$ is a witness for $\tilde{u}_i$, i.e., $u_i = M_{i,j^*} + v_{j^*}$, and $\tilde{u}_i = \lfloor M_{i,\tilde{j}}/\Delta \rfloor + \lfloor v_{\tilde{j}}/\Delta \rfloor$. We derive that
$\Delta \cdot \tilde{u}_i \le \Delta \cdot \left( \lfloor M_{i,j^*}/\Delta \rfloor + \lfloor v_{j^*}/\Delta \rfloor \right) \le u_i \le M_{i,\tilde{j}} + v_{\tilde{j}} < \Delta \cdot (\tilde{u}_i + 2).$
Therefore, we have $u_i < \Delta \cdot (\tilde{u}_i + 2)$. Since the matrix entries all take integer values, we have that if $j$ is a witness for $u_i$, then it must satisfy $\lfloor M_{i,j}/\Delta \rfloor + \lfloor v_j/\Delta \rfloor < \tilde{u}_i + 2$, i.e., $j \in C_i$.
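Lemma 13 can be sanity-checked by brute force; the following minimal Python snippet (our own test harness, assuming non-negative integer entries so that // matches floor division) verifies on a concrete instance that the true minimum is always attained inside the candidate set.

```python
def check_lemma_13(M, v, Delta):
    # Brute-force check of Lemma 13 for one query vector v: the minimum of
    # M[i][j] + v[j] must be attained by some j in the candidate set C_i.
    n = len(v)
    for i in range(n):
        u_i = min(M[i][j] + v[j] for j in range(n))
        util_i = min(M[i][j] // Delta + v[j] // Delta for j in range(n))
        C_i = [j for j in range(n)
               if M[i][j] // Delta + v[j] // Delta <= util_i + 1]
        assert u_i == min(M[i][j] + v[j] for j in C_i)
```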
Now we argue that small sets of candidates can be enumerated efficiently.
Lemma 14.
For a fixed query vector $v$ and a parameter $s$, there is an algorithm that runs in time $\tilde{O}(n^2/\Delta + n \cdot s)$ and lists all elements of all sets $C_i$ such that $|C_i| \le s$. In the case that $v^{(t)}_j$ is a (weakly) monotone function of $t$ (i.e., the 4-th case in Definition 8) this running time is amortized over the $n$ query vectors.
Proof.
We consider four cases, based on the direction of the monotonicity:
(1) Each column of $M$ is monotone.
In this case also the columns of the rounded matrix $\tilde{M}$, defined by $\tilde{M}_{i,j} = \lfloor M_{i,j}/\Delta \rfloor$, are monotone, and their entries are bounded by $O(n/\Delta)$. The algorithm uses a self-balancing binary search tree (BST) to maintain, while $i$ iterates from $1$ to $n$, the set of pairs
$\left\{ \left( \tilde{M}_{i,j} + \lfloor v_j/\Delta \rfloor,\ j \right) : j \in [n] \right\}.$
Computing $\tilde{u}_i$ is the standard tree operation of querying for the minimum. Moreover, the BST can report the number of elements smaller than a certain value in time $O(\log n)$, and enumerate them in time proportional to that number. This allows the algorithm to determine the size of $C_i$ quickly, and enumerate it if it is small. As $i$ iterates from $1$ to $n$, the algorithm only needs to update the elements where there is an increase (or decrease) from $\tilde{M}_{i,j}$ to $\tilde{M}_{i+1,j}$. In each column of $\tilde{M}$ there are at most $O(n/\Delta)$ such changes, thanks to the boundedness and monotonicity. (The locations of all such changes can be, e.g., precomputed before processing any query, or computed during each query using binary search in time $O(\log n)$ per location.) Therefore the total number of updates over the $n$ iterations is at most $O(n^2/\Delta)$, and each update takes time $O(\log n)$. The time spent on listing elements of $C_i$ (for all $i$) is $O(n \cdot s)$.
(2) For each $j$, $v^{(t)}_j$ is a monotone function of $t$.
This case is very similar to the previous one. The algorithm maintains (over the span of $n$ queries) a separate BST for each $i$, and uses it to compute $\tilde{u}_i$ for all $i$'s. When there is an increase (or decrease) from $\lfloor v^{(t)}_j/\Delta \rfloor$ to $\lfloor v^{(t+1)}_j/\Delta \rfloor$, the algorithm has to update an element in all $n$ trees, but this happens at most $O(n/\Delta)$ times for each $j$, so $O(n^2/\Delta)$ times for all $j$'s. Hence, the total time spent on such updates over the course of $n$ queries is $\tilde{O}(n^3/\Delta)$, and the amortized time per query is $\tilde{O}(n^2/\Delta)$.
(3) Each row of $M$ is monotone.
Due to the monotonicity, we can think of the $i$-th row of $\tilde{M}$, for each $i$, as consisting of $O(n/\Delta)$ contiguous blocks of identical entries. Upon receiving a query vector $v$, the algorithm builds (in linear time) a range minimum query (RMQ) data structure (see, e.g., [3]) over $\tilde{v}$, defined by $\tilde{v}_j = \lfloor v_j/\Delta \rfloor$, in order to compute in constant time the minimum entry of $\tilde{v}$ in each of the blocks. Adding each of these minima to their corresponding values from $\tilde{M}$ gives a list of candidate values for $\tilde{u}_i$, i.e.,
$\tilde{u}_i = \min_{\text{blocks } B \text{ of row } i} \left( m_B + \min_{j \in B} \tilde{v}_j \right),$
where $m_B$ denotes the common entry value of block $B$.
Thus, we already know how to compute $\tilde{u}_i$ in time $O(n/\Delta)$. Now let us explain how to extend this idea to also list elements of all small enough $C_i$'s. For each value that appears in $\tilde{v}$, the algorithm calculates the sorted sequence of indices under which this value appears in $\tilde{v}$. This allows computing in time $O(\log n)$ how many times a given value appears in a given range of indices in $\tilde{v}$; indeed, it boils down to performing two binary searches of the two endpoints of the range in the sequence corresponding to the given value. Furthermore, all these appearances can be enumerated in time proportional to their count. For each block $B$ such that $m_B + \min_{j \in B} \tilde{v}_j = \tilde{u}_i$ the algorithm enumerates all appearances of $\tilde{u}_i - m_B$ and $\tilde{u}_i + 1 - m_B$ in the range of $B$ in $\tilde{v}$, and adds them to $C_i$. If the total size of $C_i$ would exceed $s$, the algorithm stops the enumeration and proceeds to the next block. Similarly, for each block $B$ such that $m_B + \min_{j \in B} \tilde{v}_j = \tilde{u}_i + 1$ the algorithm enumerates all appearances of $\tilde{u}_i + 1 - m_B$.
(4) Each $v^{(t)}$ is monotone.
This case is symmetric to the previous one. The difference is that now the algorithm splits $\tilde{v}$ into $O(n/\Delta)$ blocks of identical entries, and prepares an RMQ data structure for each row of $\tilde{M}$.
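As an illustration of case (1), the following Python sketch (our own; a lazy-deletion heap stands in for the BST, and for brevity the change locations are found by rescanning rather than by the precomputation described above) maintains the multiset of rounded sums as $i$ advances.

```python
from collections import Counter
import heapq

def utilde_all_rows(Mt, vt):
    # Case (1) of Lemma 14, sketched: the columns of Mt (the matrix rounded
    # down by Delta) are monotone, so as i advances the multiset
    # { Mt[i][j] + vt[j] : j } changes in at most O(n/Delta) places per column.
    n = len(vt)
    cur = [Mt[0][j] + vt[j] for j in range(n)]
    alive = Counter(cur)            # multiset of the current sums
    heap = list(cur)
    heapq.heapify(heap)
    utilde = []
    for i in range(len(Mt)):
        if i > 0:
            for j in range(n):
                new = Mt[i][j] + vt[j]
                if new != cur[j]:   # one of the few changes per column
                    alive[cur[j]] -= 1
                    alive[new] += 1
                    heapq.heappush(heap, new)
                    cur[j] = new
        while alive[heap[0]] == 0:  # drop stale heap entries
            heapq.heappop(heap)
        utilde.append(heap[0])      # utilde_i = minimum of the multiset
    return utilde
```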
Now we are ready to present our subcubic algorithm for Bounded Monotone Min-Plus OMv, assuming a subcubic algorithm for Equality-OMv.
Proof of Theorem 12.
In the preprocessing, the algorithm samples uniformly and independently at random a set $S$ of columns of $M$, of size $|S| = \Theta((n/s) \log n)$. For each $h \in S$, the algorithm prepares an Equality-OMv data structure for the matrix $M^{(h)}$ obtained from $M$ by subtracting the $h$-th column from all the columns, i.e., $M^{(h)}_{i,j} = M_{i,j} - M_{i,h}$.
The algorithm handles each query in two independent steps. The goal of the first step is to compute $u_i$ for those $i$ that have $|C_i| \le s$, and the goal of the second step is to compute $u_i$ for $i$ with $|C_i| > s$.
First step.
The algorithm uses the algorithm of Lemma 14 to list all the sets $C_i$ of size at most $s$, and for each of them it computes $u_i = \min_{j \in C_i} (M_{i,j} + v_j)$ directly, which is correct by Lemma 13.
Second step.
In the second step, the algorithm must compute the remaining $u_i$'s, i.e., those for which the $C_i$'s contain too many elements to be handled in the first step. To this end, for every $h \in S$ and every offset $c \in \{0, 1, \ldots, 3\Delta - 1\}$ the algorithm computes the equality product $M^{(h)} \cdot w^{(h,c)}$, where $w^{(h,c)}_j = v_h - v_j - c$. For every $i$, if $(M^{(h)} \cdot w^{(h,c)})_i = 1$, then there must exist $j$ such that
$M_{i,j} - M_{i,h} = v_h - v_j - c,$
and hence
$M_{i,j} + v_j = M_{i,h} + v_h - c \ge u_i.$
The algorithm therefore adds $M_{i,h} + v_h - c$ to the list of possible values for $u_i$, and at the end of the process it sets each $u_i$ to the minimum over those values.
Analysis of the second step.
We now argue that if $S \cap C_i \neq \emptyset$ (which holds with high probability when $|C_i| > s$, via a standard hitting set argument, see below), then the algorithm correctly computes $u_i$ in the second step. Indeed, pick $h \in S \cap C_i$ and let $j^*$ be a witness for $u_i$, i.e., $u_i = M_{i,j^*} + v_{j^*}$. Let $c = (M_{i,h} + v_h) - u_i$. Clearly, $(M^{(h)} \cdot w^{(h,c)})_i = 1$, so it only remains to show that $0 \le c < 3\Delta$. Obviously, $c \ge 0$, because $j^*$ minimizes $M_{i,j} + v_j$. Now let us upper bound the offset $c$. Since $h \in C_i$, we have $\lfloor M_{i,h}/\Delta \rfloor + \lfloor v_h/\Delta \rfloor \le \tilde{u}_i + 1$, and hence
$M_{i,h} + v_h < \Delta \cdot (\tilde{u}_i + 3).$
Moreover, $u_i \ge \Delta \cdot \tilde{u}_i$, and therefore
$c = (M_{i,h} + v_h) - u_i < \Delta \cdot (\tilde{u}_i + 3) - \Delta \cdot \tilde{u}_i = 3\Delta.$
We conclude that $0 \le c < 3\Delta$, as required.
It remains to analyze the success probability of the whole algorithm. For a fixed output index $i$ such that $|C_i| > s$, the probability that the algorithm failed to sample an element from $C_i$ in all $|S|$ rounds is at most $(1 - s/n)^{|S|} \le n^{-c}$, for a constant $c$ that can be made arbitrarily large by adjusting the constant hidden in the size of $S$. By a union bound over all $n$ output indices for each of the $n$ queries, the algorithm succeeds to correctly compute all output entries with high probability.
Running time.
The first step of each query (Lemma 14) takes time $\tilde{O}(n^2/\Delta + n \cdot s)$, summing up to $\tilde{O}(n^3/\Delta + n^2 s)$ for all $n$ queries. Regarding the second step, for each query the algorithm computes $|S| \cdot 3\Delta = \tilde{O}(\Delta n/s)$ equality matrix-vector products, and over the course of $n$ queries this takes time $\tilde{O}((\Delta n/s) \cdot n \cdot n^{2-\varepsilon}) = \tilde{O}(\Delta n^{4-\varepsilon}/s)$. The total running time is therefore $\tilde{O}(n^3/\Delta + n^2 s + \Delta n^{4-\varepsilon}/s)$. Choosing $s = n/\Delta$ and $\Delta = n^{\varepsilon/3}$ yields the claimed running time $\tilde{O}(n^{3-\varepsilon/3})$.
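To make the second step concrete, here is a minimal Python sketch (our own reconstruction, with a naive equality oracle inlined and with the offset bound $3\Delta$ from the analysis above).

```python
import math

def second_step(M, v, S, Delta):
    # For each sampled column h in S and offset c in [0, 3*Delta), an equality
    # product against M^(h) (the h-th column subtracted from all columns)
    # reveals whether some j attains M[i][j] + v[j] = M[i][h] + v[h] - c.
    n = len(v)
    u = [math.inf] * n
    for h in S:
        # Precomputed once per h in the real algorithm's preprocessing:
        Mh = [[M[i][j] - M[i][h] for j in range(n)] for i in range(n)]
        for c in range(3 * Delta):
            w = [v[h] - v[j] - c for j in range(n)]
            # Naive Equality-OMv stand-in:
            hit = [any(Mh[i][j] == w[j] for j in range(n)) for i in range(n)]
            for i in range(n):
                if hit[i]:
                    u[i] = min(u[i], M[i][h] + v[h] - c)
    return u  # correct for every i whose candidate set C_i intersects S
```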
6 Remaining reductions
In this section we state the remaining easy reductions that complete the proof of Theorem 1.
6.1 Reduction from Dominance-OMv to Equality-OMv
Observation 15.
If Equality-OMv can be solved in time $O(n^{3-\varepsilon})$, for some $\varepsilon > 0$, then Dominance-OMv can be solved in time $\tilde{O}(n^{3-\varepsilon})$.
Proof.
The proof follows a folklore argument, see, e.g., [13]. It uses the fact that, for any two non-negative integers $a$ and $b$, it holds that $a < b$ if and only if there exists $p$ such that
- the $p$-th least significant bit of $a$ is $0$; and
- the $p$-th least significant bit of $b$ is $1$; and
- $a$ agrees with $b$ on all bits higher than the $p$-th least significant, i.e., $\lfloor a/2^{p+1} \rfloor = \lfloor b/2^{p+1} \rfloor$.
Moreover, without loss of generality, all the input numbers are integers between $1$ and $n^2 + 1$. Indeed, in the preprocessing, each entry of $M$ can be replaced by its rank in the sorted order of all entries of $M$; then, during a query, each entry of $v$ can be replaced by the rank of the smallest entry of $M$ greater than or equal to it. Last but not least, $M_{i,j} \le v_j$ if and only if $M_{i,j} < v_j + 1$, because the input numbers are integers. Hence, the algorithm sets, for every $p \in \{0, 1, \ldots, \lceil \log(n^2+2) \rceil\}$,
$M^{(p)}_{i,j} = \lfloor M_{i,j}/2^{p+1} \rfloor$ if the $p$-th least significant bit of $M_{i,j}$ is $0$ (and $M^{(p)}_{i,j} = -1$ otherwise), and
$v^{(p)}_j = \lfloor (v_j+1)/2^{p+1} \rfloor$ if the $p$-th least significant bit of $v_j + 1$ is $1$ (and $v^{(p)}_j = -2$ otherwise),
and uses the fact that there exists $j$ with $M_{i,j} \le v_j$ if and only if $(M^{(p)} \cdot v^{(p)})_i = 1$ for some $p$.
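The bit trick translates directly into code; the following Python sketch (our own, assuming non-negative entries and with a naive equality oracle in place of the hypothesized subcubic one) answers one Dominance-OMv query with one equality product per bit position.

```python
def equality_mv(A, w):
    # Naive stand-in for the hypothesized subcubic Equality-OMv oracle.
    return [int(any(A[i][j] == w[j] for j in range(len(w))))
            for i in range(len(A))]

def dominance_via_equality(M, v):
    # out[i] = 1 iff exists j with M[i][j] <= v[j], for non-negative integers.
    # Uses: M[i][j] <= v[j]  iff  M[i][j] < v[j]+1  iff  at some bit p,
    # M[i][j] has a 0, v[j]+1 has a 1, and both agree on all higher bits.
    n = len(v)
    maxval = max(max(map(max, M)), max(v) + 1)
    out = [0] * n
    A_SKIP, W_SKIP = -1, -2  # distinct sentinels that never compare equal
    for p in range(maxval.bit_length()):
        A = [[(M[i][j] >> (p + 1)) if not (M[i][j] >> p) & 1 else A_SKIP
              for j in range(n)] for i in range(n)]
        w = [((v[j] + 1) >> (p + 1)) if (v[j] + 1) >> p & 1 else W_SKIP
             for j in range(n)]
        hit = equality_mv(A, w)  # one equality product per bit position p
        out = [out[i] | hit[i] for i in range(n)]
    return out
```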
6.2 Reduction from Min-Witness-OMv to Min-Max-OMv
Observation 16.
If Min-Max-OMv can be solved in time $O(n^{3-\varepsilon})$, for some $\varepsilon > 0$, then Min-Witness-OMv can be solved in time $O(n^{3-\varepsilon})$.
Proof.
The proof follows another folklore argument, see, e.g., [15]. The algorithm sets
$M'_{i,j} = j$ if $M_{i,j} = 1$ (and $M'_{i,j} = n+1$ otherwise), and $v'_j = j$ if $v_j = 1$ (and $v'_j = n+1$ otherwise),
and uses the fact that $\min_{j} \max(M'_{i,j}, v'_j) = \min\{ j : M_{i,j} = v_j = 1 \}$ whenever a witness exists, and equals $n+1$ (i.e., exceeds $n$, signalling "no witness") otherwise.
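A Python sketch of this encoding (our own, with the min-max oracle inlined naively) follows.

```python
import math

def min_witness_via_minmax(M, v):
    # Encoding from the proof above: index j (1-indexed) where the bit is 1,
    # sentinel n+1 where it is 0; then max(Mp[i][j], vp[j]) = j exactly when
    # M[i][j] = v[j] = 1, so the min-max product recovers the min-witness.
    n = len(v)
    INF = n + 1
    Mp = [[(j + 1) if M[i][j] else INF for j in range(n)] for i in range(n)]
    vp = [(j + 1) if v[j] else INF for j in range(n)]
    # Naive min-max oracle in place of the hypothesized subcubic one:
    u = [min(max(Mp[i][j], vp[j]) for j in range(n)) for i in range(n)]
    return [u[i] if u[i] <= n else math.inf for i in range(n)]
```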
6.3 Reduction from Boolean-OMv to Bounded Monotone Min-Plus-OMv
Observation 17.
If Bounded Monotone Min-Plus-OMv can be solved in time $O(n^{3-\varepsilon})$, for some $\varepsilon > 0$, then Boolean-OMv can be solved in time $O(n^{3-\varepsilon})$.
Proof.
One suitable choice is for the algorithm to set
$M'_{i,j} = 2i + 2j + (1 - M_{i,j})$, and, for the $t$-th query, $v'^{(t)}_j = 2(n-j) + (1 - v^{(t)}_j) + t,$
and use the fact that $(M v^{(t)})_i = 1$ if and only if $\min_{j} (M'_{i,j} + v'^{(t)}_j) = 2i + 2n + t$. We remark that the above reduction produces Min-Plus-OMv instances that are monotone in all four directions simultaneously, while our Bounded Monotone Min-Plus-OMv algorithm of Theorem 12 works already for instances with monotonicity in one (arbitrarily chosen) direction.
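The following Python sketch (built on our own choice of constants above, which is consistent with the monotonicity remark) tests the encoding end to end with a naive min-plus oracle.

```python
def minplus_mv(A, w):
    # Naive stand-in for the hypothesized Bounded Monotone Min-Plus oracle.
    return [min(A[i][j] + w[j] for j in range(len(w))) for i in range(len(A))]

def boolean_omv_via_minplus(M, queries):
    # Answers Boolean OMv queries through min-plus queries, using the
    # encoding from the proof above (1-indexed i and j in the formulas).
    n = len(M)
    # Rows and columns of Mp are nondecreasing; all entries are O(n).
    Mp = [[2 * (i + 1) + 2 * (j + 1) + (1 - M[i][j]) for j in range(n)]
          for i in range(n)]
    answers = []
    for t, v in enumerate(queries, start=1):
        # vp is nonincreasing in j and, coordinate-wise, nondecreasing in t.
        vp = [2 * (n - (j + 1)) + (1 - v[j]) + t for j in range(n)]
        u = minplus_mv(Mp, vp)
        answers.append([int(u[i] == 2 * (i + 1) + 2 * n + t) for i in range(n)])
    return answers
```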
References
- [1] Amir Abboud, Nick Fischer, Zander Kelley, Shachar Lovett, and Raghu Meka. New graph decompositions and combinatorial boolean matrix multiplication algorithms. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 935–943. ACM, 2024. doi:10.1145/3618260.3649696.
- [2] Josh Alman, Ran Duan, Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou. More asymmetry yields faster matrix multiplication. In Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2025, pages 2005–2039. SIAM, 2025. doi:10.1137/1.9781611978322.63.
- [3] Michael A. Bender and Martin Farach-Colton. The LCA problem revisited. In LATIN 2000: Theoretical Informatics, 4th Latin American Symposium, volume 1776 of Lecture Notes in Computer Science, pages 88–94. Springer, 2000. doi:10.1007/10719839_9.
- [4] Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen. The NFA acceptance hypothesis: Non-combinatorial and dynamic lower bounds. In 15th Innovations in Theoretical Computer Science Conference, ITCS 2024, volume 287 of LIPIcs, pages 22:1–22:25. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.ITCS.2024.22.
- [5] Timothy M. Chan, Virginia Vassilevska Williams, and Yinzhan Xu. Fredman’s trick meets dominance product: Fine-grained complexity of unweighted APSP, 3SUM counting, and more. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, pages 419–432. ACM, 2023. doi:10.1145/3564246.3585237.
- [6] Lijie Chen, Erik D. Demaine, Yuzhou Gu, Virginia Vassilevska Williams, Yinzhan Xu, and Yuancheng Yu. Nearly optimal separation between partially and fully retroactive data structures. In 16th Scandinavian Symposium and Workshops on Algorithm Theory, SWAT 2018, volume 101 of LIPIcs, pages 33:1–33:12. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2018. doi:10.4230/LIPIcs.SWAT.2018.33.
- [7] Lijie Chen and Ryan Williams. An equivalence class for orthogonal vectors. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, pages 21–40. SIAM, 2019. doi:10.1137/1.9781611975482.2.
- [8] Artur Czumaj, Miroslaw Kowaluk, and Andrzej Lingas. Faster algorithms for finding lowest common ancestors in directed acyclic graphs. Theor. Comput. Sci., 380(1-2):37–46, 2007. doi:10.1016/J.TCS.2007.02.053.
- [9] Ran Duan and Seth Pettie. Fast algorithms for (max, min)-matrix multiplication and bottleneck shortest paths. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2009, pages 384–391. SIAM, 2009. doi:10.1137/1.9781611973068.43.
- [10] Yuzhou Gu, Adam Polak, Virginia Vassilevska Williams, and Yinzhan Xu. Faster monotone min-plus product, range mode, and single source replacement paths. In 48th International Colloquium on Automata, Languages, and Programming, ICALP 2021, volume 198 of LIPIcs, pages 75:1–75:20. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPICS.ICALP.2021.75.
- [11] Monika Henzinger, Sebastian Krinninger, Danupon Nanongkai, and Thatchaphol Saranurak. Unifying and strengthening hardness for dynamic problems via the online matrix-vector multiplication conjecture. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, pages 21–30. ACM, 2015. doi:10.1145/2746539.2746609.
- [12] Piotr Indyk, Moshe Lewenstein, Ohad Lipsky, and Ely Porat. Closest pair problems in very high dimensions. In Automata, Languages and Programming: 31st International Colloquium, ICALP 2004, volume 3142 of Lecture Notes in Computer Science, pages 782–792. Springer, 2004. doi:10.1007/978-3-540-27836-8_66.
- [13] Karim Labib, Przemyslaw Uznanski, and Daniel Wolleb-Graf. Hamming distance completeness. In 30th Annual Symposium on Combinatorial Pattern Matching, CPM 2019, volume 128 of LIPIcs, pages 14:1–14:17. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPICS.CPM.2019.14.
- [14] Kasper Green Larsen and R. Ryan Williams. Faster online matrix-vector multiplication. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, pages 2182–2189. SIAM, 2017. doi:10.1137/1.9781611974782.142.
- [15] Andrea Lincoln, Adam Polak, and Virginia Vassilevska Williams. Monochromatic triangles, intermediate matrix products, and convolutions. In 11th Innovations in Theoretical Computer Science Conference, ITCS 2020, volume 151 of LIPIcs, pages 53:1–53:18. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.ITCS.2020.53.
- [16] Yang P. Liu. On approximate fully-dynamic matching and online matrix-vector multiplication. In 65th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2024, pages 228–243. IEEE, 2024. doi:10.1109/FOCS61266.2024.00006.
- [17] Jirí Matousek. Computing dominances in E^n. Inf. Process. Lett., 38(5):277–278, 1991. doi:10.1016/0020-0190(91)90071-O.
- [18] Kerui Min, Ming-Yang Kao, and Hong Zhu. The closest pair problem under the Hamming metric. In Computing and Combinatorics, 15th Annual International Conference, COCOON 2009, volume 5609 of Lecture Notes in Computer Science, pages 205–214. Springer, 2009. doi:10.1007/978-3-642-02882-3_21.
- [19] Mihai Pătraşcu. Towards polynomial lower bounds for dynamic problems. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, pages 603–610. ACM, 2010. doi:10.1145/1806689.1806772.
- [20] Virginia Vassilevska. Efficient algorithms for path problems in weighted graphs. PhD thesis, Carnegie Mellon University, USA, 2008.
- [21] Virginia Vassilevska, Ryan Williams, and Raphael Yuster. All pairs bottleneck paths and max-min matrix products in truly subcubic time. Theory Comput., 5(1):173–189, 2009. Announced at STOC 2007. doi:10.4086/TOC.2009.V005A009.
- [22] Virginia Vassilevska Williams. On some fine-grained questions in algorithms and complexity. In Proceedings of the International Congress of Mathematicians, ICM 2018, pages 3447–3487. World Scientific, 2018. doi:10.1142/9789813272880_0188.
- [23] Virginia Vassilevska Williams and R. Ryan Williams. Subcubic equivalences between path, matrix, and triangle problems. J. ACM, 65(5):27:1–27:38, 2018. Announced at FOCS 2010. doi:10.1145/3186893.
- [24] Virginia Vassilevska Williams and Yinzhan Xu. Monochromatic triangles, triangle listing and APSP. In 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020, pages 786–797. IEEE, 2020. doi:10.1109/FOCS46700.2020.00078.
- [25] Virginia Vassilevska Williams and Yinzhan Xu. Truly subcubic min-plus product for less structured matrices, with applications. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, pages 12–29. SIAM, 2020. doi:10.1137/1.9781611975994.2.
- [26] R. Ryan Williams. Faster all-pairs shortest paths via circuit complexity. SIAM J. Comput., 47(5):1965–1985, 2018. Announced at STOC 2014. doi:10.1137/15M1024524.
- [27] Raphael Yuster and Uri Zwick. Fast sparse matrix multiplication. ACM Trans. Algorithms, 1(1):2–13, 2005. Announced at ESA 2004. doi:10.1145/1077464.1077466.
