A Faster Parametric Search for the Integral Quickest Transshipment Problem
Abstract
Algorithms for computing fractional solutions to the quickest transshipment problem have been significantly improved since Hoppe and Tardos first solved the problem in strongly polynomial time. For integral solutions, however, no structural improvements on their algorithm itself have yet been proposed; runtime improvements are limited to general progress on submodular function minimization (SFM), which is an integral part of Hoppe and Tardos' algorithm. In fact, SFM constitutes the main computational load of the algorithm, as its runtime is blown up by its use within Megiddo's parametric search. We replace this part of Hoppe and Tardos' algorithm with a more efficient routine that solves only a linear number of SFM instances and, in contrast to previous techniques, exclusively uses minimum cost flow algorithms within Megiddo's parametric search. Our approach improves upon the state-of-the-art runtime, where k denotes the number of terminals and m the number of arcs, and polylogarithmic factors are suppressed by the Õ-notation.
Keywords and phrases: Flow over time, dynamic transshipment, quickest transshipment, parametric submodular functions, efficient algorithms
2012 ACM Subject Classification: Theory of computation → Network flows
Acknowledgements: We would like to express our gratitude to the anonymous reviewers for their valuable feedback and constructive comments, which have greatly contributed to improving the presentation of this paper.
Funding: This work is co-funded by the German research council (DFG) – project number 442047500 – Collaborative Research Center Sparsity and Singular Structures (SFB 1481) and by the German research council (DFG) Research Training Group 2236 UnRAVeL.
Editors: Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman
Series and Publisher: Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Network flows over time, also referred to as dynamic flows, extend classical static network flows by a time component. They provide a powerful tool for modeling real-world problems in traffic engineering, building evacuation, and logistics. Over the last decades, a wide range of optimization problems dealing with flows over time have been studied. The maximum flow over time problem was studied in the seminal work of Ford and Fulkerson [3], who showed that the problem can be solved in polynomial time by a reduction to the static minimum cost flow problem. The quickest flow problem asks for the minimum time horizon T such that a given demand of d units of flow can be sent from a single source s to a single sink t. While a straightforward approach is to combine an algorithm for maximum flows over time with a parametric search algorithm [2], recent results have shown that cost scaling algorithms for minimum cost flows can be modified in order to efficiently compute quickest flows [7, 10].
The quickest transshipment problem generalizes the quickest flow problem by allowing for supply and demand at multiple sources and sinks. It is one of the most fundamental problems in the field of network flows over time and, as recently stated by Skutella [14], “arguably the most difficult flow over time problem that can still be solved in polynomial time.” As in the quickest flow problem, our goal is to send the required flow from sources to sinks while simultaneously minimizing the time horizon.
Similar to the quickest flow problem, the quickest transshipment problem can be solved by determining the minimum time horizon via parametric search. Using this idea, Hoppe and Tardos [4] showed that the quickest transshipment problem can be solved in strongly polynomial time, that is, their algorithm's runtime is polynomially bounded in the number of nodes n, the number of arcs m, and the combined number k of sources and sinks.
Recently, faster algorithms have been developed. Notably, Schlöter, Skutella, and Tran [12] proposed an algorithm with a significantly improved time complexity. Unfortunately, these performance improvements come at the expense of fractional solutions, which may be undesirable for applications that do not allow flow particles to be split. While some of the results speed up the search for the optimal time horizon, no improvements have yet been proposed for finding integral flows over time. Hence, the state-of-the-art complexity for the integral quickest transshipment problem remains unchanged [12].
Our Contribution
We propose improvements to Hoppe and Tardos' algorithm by replacing two computationally expensive subroutines in which submodular function minimization is combined with Megiddo's parametric search. By doing so, we reduce the state-of-the-art runtime for the integral quickest transshipment problem. This narrows the gap to the fractional quickest transshipment problem, which can be solved faster using the algorithm by Schlöter, Skutella, and Tran [12].
2 Preliminaries
Given a directed graph G = (V, A) with vertices V and arcs A, we define a dynamic network as a triple N = (G, u, τ) with capacity u_a and transit time τ_a for each arc a ∈ A. For a given dynamic network N, the set V(N) denotes the network's nodes, while A(N) refers to the network's arcs. Throughout this paper, we denote the number of nodes by n and the number of arcs by m.
A flow over time f is a family of functions f_a: [0, T) → ℝ≥0, representing the in-flow rate of each arc a ∈ A at every point in time until the end of the time horizon T. The f_a(θ)-many flow units entering arc a at time θ arrive at time θ + τ_a at the end node of a. While we only require f to satisfy weak flow conservation, Hoppe and Tardos' algorithm computes an optimal solution satisfying even strict flow conservation, meaning that all flow units entering a non-sink node are immediately forwarded via an outgoing arc. We refer the reader to [13] for more details on flows over time.
Note that, in contrast to Hoppe and Tardos [4], who defined a flow over time as a static flow in a time-expanded network, we use a continuous-time model. However, all prerequisites for their algorithm directly translate from the discrete-time to the continuous-time model, as shown in [14], meaning that their algorithm can also be implemented in the continuous-time model.
For the dynamic transshipment problem, we have a triple (N, b, T) comprising a dynamic network N, a balance function b: V → ℝ with b(V) = 0, and a time horizon T. The balances describe how supply and demand are distributed across the network. A node with positive balance is a source, while a node with negative balance is a sink. Let S⁺ denote the set of sources, S⁻ the set of sinks, and S = S⁺ ∪ S⁻ the set of terminals. A dynamic transshipment instance is feasible if there exists a flow over time sending the supply from the sources to the sinks such that all demands are satisfied.
Definition 1.
Given a subset of terminals X ⊆ S, the maximum out-flow o(X) out of X is the value of the maximum flow over time from the sources S⁺ ∩ X to the sinks S⁻ ∖ X.
The central feasibility criterion states that the net balance b(X) must not exceed the maximum out-flow o(X) for any subset X ⊆ S.
Theorem 2 (Feasibility Criterion [4]).
The dynamic transshipment instance is feasible if and only if b(X) ≤ o(X) for all X ⊆ S.
We call a set X ⊆ S with b(X) > o(X) a violated set. In order to determine the feasibility of a given dynamic transshipment instance, it suffices to show that no violated set exists. However, while the value of o(X), and thus of o(X) − b(X), can be computed with the Ford-Fulkerson algorithm for maximum flows over time, it is not obvious how to avoid enumerating all subsets of terminals. Fortunately, we can use the submodularity of o and hence of o − b.
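For intuition, the feasibility criterion of Theorem 2 can be checked naively by enumerating all terminal subsets. The following is a minimal sketch, with a hypothetical out-flow oracle standing in for the Ford-Fulkerson computation:

```python
from itertools import chain, combinations

def violated_sets(terminals, balance, outflow):
    """Enumerate all violated sets X with b(X) > o(X) (cf. Theorem 2).

    `balance` maps each terminal to its balance b(v); `outflow` is an
    oracle returning the maximum out-flow o(X) of a terminal set X.
    Exponential in the number of terminals -- for illustration only;
    the efficient route is submodular function minimization.
    """
    subsets = chain.from_iterable(
        combinations(terminals, r) for r in range(len(terminals) + 1))
    return [set(X) for X in subsets
            if sum(balance[v] for v in X) > outflow(frozenset(X))]
```

The instance is feasible exactly if the returned list is empty.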
Definition 3 (Submodular Function).
A function f: 2^U → ℝ over a finite ground set U is a submodular function if for all X, Y ⊆ U it holds that

f(X) + f(Y) ≥ f(X ∪ Y) + f(X ∩ Y), | (1)

or, equivalently, if for all X ⊆ Y ⊆ U and x ∈ U ∖ Y it holds that

f(X ∪ {x}) − f(X) ≥ f(Y ∪ {x}) − f(Y). | (2)
Given a submodular function f over U, we call a set X ⊆ U with f(X) = min_{Y ⊆ U} f(Y) a minimizer of f. It is well-known that the set of minimizers of a submodular function is closed under union and intersection. Therefore, there always exists a minimal minimizer and a maximal minimizer, which are the intersection and union of all minimizers, respectively.
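The lattice structure of minimizers can be observed directly on small ground sets. The sketch below brute-forces all minimizers of a toy function (modular, hence submodular); the weights w are invented for illustration.

```python
from itertools import chain, combinations

def all_minimizers(ground, f):
    """Brute-force the minimum value and all minimizers of a set function.

    For a submodular f, the minimizers are closed under union and
    intersection, so intersecting all of them yields the minimal
    minimizer and uniting them yields the maximal one.  Exponential --
    a didactic sketch only.
    """
    subsets = [frozenset(X) for X in chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1))]
    mu = min(f(X) for X in subsets)
    return mu, [X for X in subsets if f(X) == mu]

# Toy example with invented weights: f(X) = |X| - w(X) is modular,
# hence submodular.
w = {'a': 2, 'b': 0, 'c': 1}
mu, mins = all_minimizers('abc', lambda X: len(X) - sum(w[v] for v in X))
minimal_minimizer = frozenset.intersection(*mins)
maximal_minimizer = frozenset.union(*mins)
```

Here the minimizers are {a} and {a, c}, so the minimal minimizer is {a} and the maximal one is {a, c}.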
In the context of submodular function minimization (SFM), a submodular function is typically provided in the form of an evaluation oracle with evaluation time EO. The performance of algorithms is measured in the number of oracle calls required for finding a minimizer. The fastest strongly polynomial algorithm for submodular function minimization is due to Lee, Sidford, and Wong [6], with a runtime of O(|U|³ log² |U| · EO + |U|⁴ log^{O(1)} |U|).
For our purpose, the evaluation oracle computes a maximum out-flow o(X) out of a set of terminals X using the Ford-Fulkerson algorithm for maximum flows over time. To this end, we introduce a super-source and a super-sink and connect them to the sources in X and the sinks in S⁻ ∖ X, respectively, via infinite-capacity, zero-transit arcs. Then the maximum out-flow o(X) is the value of the maximum flow over time from the super-source to the super-sink. This value can be computed using a static minimum cost flow computation in O(m log n (m + n log n)) time via Orlin's algorithm [9]. We abbreviate this runtime as MCF(n, m).
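The reduction behind this oracle can be made concrete: by the classical result of Ford and Fulkerson, a maximum flow over time with horizon T is composed of temporally repeated paths, and its value can be computed by successive shortest path augmentation with respect to transit times, where a path of transit time d carrying c units contributes c · (T − d) flow units. The sketch below implements this idea with a Bellman-Ford search on the residual network; it is a didactic stand-in, not the Orlin-based subroutine whose runtime is quoted above.

```python
from collections import defaultdict

def max_flow_over_time_value(n, arcs, s, t, T):
    """Value of a maximum s-t flow over time with time horizon T.

    Successive shortest path augmentation w.r.t. transit times on the
    static network: an augmenting path of transit time d and bottleneck
    capacity c contributes c * (T - d) flow units via temporal
    repetition.  `arcs` is a list of tuples (u, v, capacity, transit).
    """
    graph = defaultdict(list)  # u -> list of [v, residual cap, transit, rev idx]

    def add(u, v, cap, tau):
        graph[u].append([v, cap, tau, len(graph[v])])
        graph[v].append([u, 0, -tau, len(graph[u]) - 1])

    for u, v, cap, tau in arcs:
        add(u, v, cap, tau)

    value = 0
    while True:
        # Bellman-Ford: residual transits may be negative, but the
        # residual graph of a min-cost flow has no negative cycles.
        dist, parent = {s: 0}, {}
        for _ in range(n - 1):
            for u in list(dist):
                for i, (v, cap, tau, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + tau < dist.get(v, float('inf')):
                        dist[v] = dist[u] + tau
                        parent[v] = (u, i)
        if t not in dist or dist[t] > T:
            return value        # no augmenting path arrives within T
        bottleneck, v = float('inf'), t
        while v != s:           # find bottleneck capacity on the path
            u, i = parent[v]
            bottleneck = min(bottleneck, graph[u][i][1])
            v = u
        v = t
        while v != s:           # augment along the path
            u, i = parent[v]
            graph[u][i][1] -= bottleneck
            graph[v][graph[u][i][3]][1] += bottleneck
            v = u
        value += bottleneck * (T - dist[t])
```

For instance, on a network with arcs s→a→t of transit 1 each and a direct arc s→t of transit 3, horizon T = 4 yields value 2·(4−2) + 1·(4−3) = 5.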
Consequently, determining whether a dynamic transshipment instance is feasible takes one submodular function minimization over the k terminals, where each oracle call is a min-cost flow computation. To improve readability, we denote the time it takes to check whether an instance with k terminals, n nodes, and m arcs is feasible by Feas(k, n, m).
The Algorithm by Hoppe and Tardos
The algorithm by Hoppe and Tardos [4] was the first strongly polynomial algorithm computing quickest transshipments and remains the most efficient one for integral solutions. In the following, we assume that the minimum time horizon T is provided and focus on the part of the algorithm that computes an integral dynamic transshipment. The algorithm relies on the concept of tight orders.
Definition 4 (Tight Set and Order).
A set of terminals X ⊆ S is called tight if b(X) = o(X) holds. Let s₁ ≺ s₂ ≺ ⋯ ≺ s_k be a total order on S. We call ≺ tight if the prefix {s₁, …, s_i} is tight for all i ∈ {1, …, k}.
The general idea by Hoppe and Tardos is to construct an equivalent dynamic transshipment instance for which a tight order exists.
Theorem 5 (Reduction to Lex-Max Flows [4]).
Given a dynamic transshipment instance with a tight order over the terminals, an integral dynamic transshipment satisfying all supplies and demands can be computed as a lex-max flow over time in O(k · MCF(n, m)) time.
We refer to [14] for more details on lex-max flows over time. Although computing the integral flow is quite efficient, transforming a given instance into an equivalent one with a tight order is computationally demanding. We propose improvements to this transformation in Section 5. Before that, we briefly discuss the approach by Hoppe and Tardos.
First, one adds a new terminal s′ with b(s′) = b(s) for every terminal s ∈ S and then sets b(s) = 0. Note that the set of terminals now only contains the new nodes s′. By adding infinite-capacity, zero-transit arcs (s′, s) for sources s ∈ S⁺ and (s, s′) for sinks s ∈ S⁻, one ensures that the resulting dynamic transshipment instance is equivalent to the original instance (cf. Figure 1).
To construct an instance admitting a tight order, we iteratively shift supply or demand from the terminals s′ to newly introduced terminals. In this paper, we call the terminals s′ the drained terminals and the new terminals the filled terminals. We refer the reader to Hoppe and Tardos [4] and the description in the extended paper [1] for a general overview of the algorithm. We focus on the two subroutines, MaximizeAlpha and MinimizeDelta, which compute the amount of supply or demand that is shifted from a drained terminal to a filled terminal.
When discussing both subroutines, we assume that a feasible dynamic transshipment instance is given, with a set of terminals consisting of the drained terminals, added in the first step, and all filled terminals that were introduced in previous iterations. Furthermore, we are given two tight sets A ⊆ B and a drained terminal s′ satisfying s′ ∈ B ∖ A.
Note that the instance, its set of terminals, as well as the tight sets A and B and the drained terminal s′ are the input of our subroutines and vary between iterations. In contrast, every subroutine adds a new filled terminal. However, since both subroutines are discussed in isolation, we simplify the notation by referring to both filled terminals as s*. Similar to Hoppe and Tardos, we only discuss the case in which the drained terminal is a source, since the treatment of sinks is symmetrical and is sketched in the extended paper [1].
Capacity-Parametric Instances
The first subroutine starts with the tight sets A and B and a drained source s′ ∈ B ∖ A for which A ∪ {s′} is not tight. Its aim is to reassign as much supply as possible from s′ to a new filled source s*. In doing so, we capacitate the out-flow of s* such that A ∪ {s*} is tight and the transformed instance remains feasible.
Definition 6 (α-Parametric Dynamic Network).
Given a dynamic network N, a drained source s′, and a parameter α ≥ 0, the α-parametric network N_α is constructed by adding a new filled source s* and connecting it to s′ via an α-capacity, zero-transit arc (s*, s′).
Let o_α be the parametric counterpart of the maximum out-flow o from Definition 1 in the parametric network N_α. Using this notation, we define α-parametric dynamic transshipment instances analogously to Hoppe and Tardos [4]. The construction of an α-parametric instance is illustrated in Figure 1.
Definition 7 (α-Parametric Dynamic Transshipment Instance).
Given a feasible dynamic transshipment instance (N, b, T), two tight sets of terminals A ⊆ B, a drained source s′ ∈ B ∖ A, and a parameter α ≥ 0, the corresponding α-parametric dynamic transshipment instance consists of the following components.
-
An α-parametric dynamic network N_α as in Definition 6.
-
An α-parametric balance function b_α that coincides with b on all terminals other than s′ and s* and shifts α units of supply from the drained source to the filled source, that is, b_α(s′) = b(s′) − α and b_α(s*) = α.
We call a parameter value α feasible if the corresponding α-parametric dynamic transshipment instance is feasible. Determining whether a value is feasible is equivalent to checking whether a violated set exists. Recall that this can be done by minimizing the parametric submodular function o_α − b_α. A subroutine of the algorithm by Hoppe and Tardos finds a maximum feasible parameter value.
Definition 8 (MaximizeAlpha).
Given an α-parametric dynamic transshipment instance, find the maximum feasible parameter value α.
We denote the maximum feasible parameter value by α*. Hoppe and Tardos concluded that, given a feasibility oracle, the value α* can be found via binary search in a weakly polynomial number of feasibility checks. In addition, one can combine Megiddo's parametric search [8] with the combinatorial SFM algorithm by Iwata and Orlin [5] to achieve a strongly polynomial runtime. We mainly improve upon the strongly polynomial approach.
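The weakly polynomial variant can be sketched in a few lines: a plain binary search over integral parameter values against a black-box feasibility oracle. In the paper, each probe is one submodular function minimization; the oracle and bounds used here are hypothetical placeholders.

```python
def maximize_alpha(feasible, alpha_max):
    """Largest integral alpha in [0, alpha_max] with feasible(alpha) True.

    Assumes feasible(0) holds and that feasibility is monotone: once a
    value is infeasible, all larger values are infeasible.  `feasible`
    is a hypothetical stand-in for one SFM-based feasibility check.
    """
    lo, hi = 0, alpha_max
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if feasible(mid):
            lo = mid        # mid is feasible, search above
        else:
            hi = mid - 1    # mid is infeasible, search below
    return lo
```

The number of oracle calls is logarithmic in the parameter range, which is why this variant is only weakly polynomial.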
Transit-Parametric Instances
The second subroutine is closely related to MaximizeAlpha. Again, we start with tight sets A and B and a drained source s′ ∈ B ∖ A for which A ∪ {s′} is not tight. The aim is to reassign as much supply as possible from s′ to a new filled source s*. In doing so, we set the transit time for the flow out of s* such that A ∪ {s*} is tight and the transformed instance remains feasible.
Definition 9 (δ-Parametric Dynamic Network).
Given a dynamic network N, a drained source s′, and a parameter δ ≥ 0, the δ-parametric network N_δ is constructed by adding a terminal s* and connecting it to s′ via a unit-capacity, δ-transit arc (s*, s′).
Analogously to o_α, we denote the maximum out-flow of a subset of terminals in the δ-parametric network by o_δ. Next, we combine this parametric variant of our dynamic network with a parametric balance function to form a δ-parametric dynamic transshipment instance (cf. Figure 2).
Definition 10 (δ-Parametric Dynamic Transshipment Instance).
Given a feasible dynamic transshipment instance (N, b, T), two tight sets of terminals A ⊆ B, a drained source s′ ∈ B ∖ A, and a parameter δ ≥ 0, a δ-parametric dynamic transshipment instance consists of the following components.
-
A δ-parametric dynamic network N_δ as given in Definition 9.
-
A δ-parametric balance function b_δ that coincides with b on all terminals other than s′ and s* and shifts from the drained source to the filled source exactly the amount of supply that the unit-capacity, δ-transit arc can carry within the time horizon.
Again, a parameter value δ is feasible if the corresponding parametric dynamic transshipment instance is feasible. This can be checked by minimizing the parametric submodular function o_δ − b_δ. Again, we call a set X with b_δ(X) > o_δ(X) violated. This yields the following parametric search problem.
Definition 11 (MinimizeDelta).
Given a δ-parametric dynamic transshipment instance, find the minimum feasible parameter value δ.
Again, δ* denotes the minimum feasible parameter value. Hoppe and Tardos showed that the optimal value can be found via binary search in weakly polynomial time, or in strongly polynomial time using the parametric search of Megiddo [8].
The algorithm by Hoppe and Tardos performs a total of O(k) iterations, each of which calls both subroutines MaximizeAlpha and MinimizeDelta once. Therefore, the overall runtime of the current version of the algorithm is dominated by these subroutines, whether they are implemented with a binary search or with Megiddo's parametric search. In the following sections, we improve the runtime of each iteration by introducing better parametric search algorithms for both subroutines.
3 Restricting the Domains of Violated Sets
Both problems MaximizeAlpha and MinimizeDelta introduced in the previous section rely on minimizing submodular functions to determine whether a parameter value is feasible. Even with state-of-the-art algorithms for SFM, this remains a computationally expensive subroutine that scales poorly with the number of terminals in the ground set.
We show that, if our parametric instance is infeasible, there always exists a violated set X satisfying A ∪ {s*} ⊆ X ⊆ (B ∪ {s*}) ∖ {s′}, which allows us to restrict the ground sets of our parametric submodular functions o_α − b_α and o_δ − b_δ. This provides a practical improvement and establishes the foundation for the following sections.
Lemma 12.
Let X be a violated set for an α-parametric dynamic transshipment instance with respect to a drained source s′ and filled source s*. Then s* ∈ X and s′ ∉ X. The same applies to δ-parametric dynamic transshipment instances.
Proof.
We only prove the statement for α-parametric instances, as the reasoning is analogous for δ-parametric instances. Let X be an arbitrary subset of terminals. Given that we assume the original transshipment instance to be feasible, we have b(X) ≤ o(X). Remember that the maximum out-flow o_α(X) is the value of a maximum flow over time from a super-source to a super-sink, where infinite-capacity, zero-transit arcs are used to connect the super-source to every source in X and to connect every sink outside of X to the super-sink. Using this definition, we derive the following observations regarding o_α:
-
O1: If X contains neither s′ nor s*, then the maximum out-flow o_α(X) coincides with the out-flow o(X) in the original instance.
-
O2: If X contains s′, then all flow traveling from s* to s′ can bypass the α-capacity arc (s*, s′) and move along the infinite-capacity arcs via the super-source instead. As a consequence, the value o_α(X) does not depend on α.
We refer the reader back to Figure 1 for a visual intuition. Combining these observations with the parametric balances b_α, we prove Lemma 12 by considering the following three complementary cases.
-
(I) If s′ ∈ X and s* ∈ X, then it follows that b_α(X) = b(X), as the supply is merely shifted within X. Together with Observation O2 we obtain o_α(X) ≥ o(X) ≥ b(X) = b_α(X), meaning that X cannot be a violated set.
-
(II) If s′ ∈ X and s* ∉ X, then it follows that b_α(X) = b(X) − α ≤ b(X), which, together with Observation O2, implies that X cannot be a violated set since b_α(X) ≤ b(X) ≤ o(X) ≤ o_α(X).
-
(III) If s′ ∉ X and s* ∉ X, then b_α(X) = b(X) and therefore X cannot be a violated set, as it follows from Observation O1 that o_α(X) = o(X) ≥ b(X).
Hence, X can only be a violated set of the parametric instance if s* ∈ X and s′ ∉ X.
Hoppe and Tardos [4] proved the property from Lemma 12 for one special infeasible parameter value and used it to show that there exists a set X with A ∪ {s*} ⊆ X ⊆ (B ∪ {s*}) ∖ {s′} which is violated for that value and tight for the optimal one. We employ similar arguments to show that this also holds for all infeasible parameter values.
Lemma 13.
Let α be an infeasible parameter value for MaximizeAlpha. Then there is a minimizer X of o_α − b_α with A ∪ {s*} ⊆ X ⊆ (B ∪ {s*}) ∖ {s′}. Analogously, let δ be an infeasible parameter value for MinimizeDelta. Then there is a minimizer X of o_δ − b_δ with A ∪ {s*} ⊆ X ⊆ (B ∪ {s*}) ∖ {s′}.
Proof.
We only prove the statement for MaximizeAlpha, as the proof for MinimizeDelta is analogous. Let X be an arbitrary minimizer of o_α − b_α; since α is infeasible, its value is negative. It follows from Lemma 12 that s* ∈ X and s′ ∉ X. Next, we show that the set (X ∪ A) ∩ (B ∪ {s*}) is also a minimizer of o_α − b_α. For this, we analyze the tightness of the sets A and B ∪ {s*}:
-
A was chosen to be a tight set. This also does not change in the parametric instance, as A contains neither s′ nor s* and hence, by Observation O1, o_α(A) − b_α(A) = o(A) − b(A) = 0.
-
B ∪ {s*} is tight because B is tight and b_α(B ∪ {s*}) = b(B) together with Observation O2 (note that s′ ∈ B) directly imply o_α(B ∪ {s*}) − b_α(B ∪ {s*}) = o(B) − b(B) = 0.
Having shown that A and B ∪ {s*} are tight sets, we study how tight sets and minimizers of g = o_α − b_α behave under union and intersection. For this purpose, let Y be a tight set and let μ < 0 denote the minimum value of g. It follows from submodularity of g that

g(X ∩ Y) + g(X ∪ Y) ≤ g(X) + g(Y) = μ. | (3)

Note that both summands on the left hand side are non-positive, since otherwise one of them would be smaller than the minimum value μ. Hence, either both summands are negative, or one is equal to zero while the other equals the minimum value of g. In other words, exactly one of the following properties must hold:
-
(1) Both X ∩ Y and X ∪ Y are violated sets of the parametric instance.
-
(2) Either X ∩ Y or X ∪ Y is a minimizer, while the other set is tight.
We apply this case distinction to the cases Y = A and Y = B ∪ {s*}:
-
Let Y = A. Then Lemma 12 states that a violated set must contain s*, which rules out X ∩ A as a violated set since s* ∉ A. This means that X ∪ A is a minimizer of g.
-
Consider Y = B ∪ {s*} and the minimizer X ∪ A. We rule out (X ∪ A) ∪ Y as a violated set since it contains s′ ∈ B. Therefore, (X ∪ A) ∩ (B ∪ {s*}) is a minimizer.
Finally, observe that the minimizer (X ∪ A) ∩ (B ∪ {s*}) not only satisfies A ∪ {s*} ⊆ (X ∪ A) ∩ (B ∪ {s*}) ⊆ B ∪ {s*} by construction, but also excludes s′, since it has negative value and is therefore a violated set to which Lemma 12 applies.
Lemma 13 allows us to determine the feasibility of a parameter value by minimizing o_α − b_α and o_δ − b_δ only over sets X with A ∪ {s*} ⊆ X ⊆ (B ∪ {s*}) ∖ {s′}. For future use, we call the functions restricted to this domain the restricted parametric functions.
This brings us to the main result in this section.
Corollary 14.
A parameter value α is feasible for MaximizeAlpha if and only if b_α(X) ≤ o_α(X) for every X with A ∪ {s*} ⊆ X ⊆ (B ∪ {s*}) ∖ {s′}. Analogously, a parameter value δ is feasible for MinimizeDelta if and only if b_δ(X) ≤ o_δ(X) for every such X.
The obvious advantage of this result is the reduction of the domain of the submodular functions of interest to the sets between A ∪ {s*} and (B ∪ {s*}) ∖ {s′}. Although B ∖ A may comprise almost all terminals at the beginning, the difference becomes smaller over the course of Hoppe and Tardos' algorithm, until both sets are eventually equal. This yields a significant practical speed-up in later iterations of the algorithm. Moreover, the reduction brings further practical and theoretical advantages, which we discuss in the following section.
4 Strong Map Sequences
We concluded the previous section with a useful restriction of our submodular functions o_α − b_α and o_δ − b_δ to sets X with A ∪ {s*} ⊆ X ⊆ (B ∪ {s*}) ∖ {s′}, while maintaining the guarantee that our parametric instance is feasible if and only if the minimizer of the restricted function is not violated. Building on this, we show that both restricted functions satisfy the strong map property (also known as the decreasing differences property).
Definition 15 (Strong Map Property [11]).
Let f and g be two submodular functions defined over the same finite ground set U. We write f → g, or g ← f, if X ⊆ Y ⊆ U implies

f(Y) − f(X) ≥ g(Y) − g(X).

The relation f → g is called a strong map. Submodular functions f₁, …, f_ℓ form a strong map sequence if f₁ → f₂ → ⋯ → f_ℓ.
Recall that the minimizers of submodular functions are closed under union and intersection, meaning that there exists a unique minimal and maximal minimizer for every submodular function. A result by Topkis [15] relates the minimal and maximal minimizers of functions that form strong map sequences.
Lemma 16 (Minimizers for Strong Map Sequences [11]).
Let f and g be two submodular functions over the same ground set with f → g. Denote by X_f and Y_f the minimal and maximal minimizer of f, respectively, while X_g and Y_g are the minimal and maximal minimizer of g, respectively. Then X_f ⊆ X_g and Y_f ⊆ Y_g.
Lemma 16 already gives an intuition for how the strong map property can be used for the parametric search problems MaximizeAlpha and MinimizeDelta: each choice of α or δ gives us a new submodular function, and, if these functions form strong map sequences, the strong map property implies that the minimal minimizers for a monotonic sequence of parameter values form a nested chain, so we can encounter only linearly many distinct minimal minimizers.
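This nesting is easy to observe on a toy parametric family: adding a modular term λ · |X| to a fixed set function yields a strong map sequence, and the minimal minimizers shrink monotonically as λ grows. The weights below are invented for illustration.

```python
from itertools import chain, combinations

def minimal_minimizer(ground, f):
    """Minimal minimizer of a set function f, by brute force."""
    subsets = [frozenset(X) for X in chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1))]
    mu = min(f(X) for X in subsets)
    return frozenset.intersection(*(X for X in subsets if f(X) == mu))

# Parametric family f_lam(X) = lam * |X| - w(X): raising lam adds a
# modular term, which yields a strong map sequence, so the minimal
# minimizers form a nested, shrinking chain as lam grows.
w = {'a': 5, 'b': 3, 'c': 1}
nested_chain = [
    minimal_minimizer('abc', lambda X, lam=lam: lam * len(X) - sum(w[v] for v in X))
    for lam in range(7)]
```

Here the chain starts at the full ground set for λ = 0 and shrinks down to the empty set, passing through at most |ground| + 1 distinct minimizers.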
Unfortunately, neither o_α − b_α nor o_δ − b_δ satisfies the strong map property as is. (In nontrivial cases, the unrestricted functions violate the decreasing differences property, so our proofs of Lemma 17 and Lemma 18 fail for the general functions o_α − b_α and o_δ − b_δ.) However, this changes when considering the restricted functions defined in Section 3. Recall that these functions are defined over the ground set B ∖ A, and that their definition ensures that A and s* are implicitly added to every set, while s′ is excluded.
Lemma 17.
Let 0 ≤ α ≤ α′. Then the restricted functions satisfy (o_α − b_α) → (o_{α′} − b_{α′}).
Proof.
Our argument is structured as follows: We first prove that o_α → o_{α+1} holds for all α and then use this result to prove that the restricted functions o_α − b_α also form a strong map sequence. The general claim then immediately follows by transitivity of →.

Let α be arbitrary but fixed. We construct an auxiliary network from the α-parametric network by adding a node s₁ and connecting it to s via a unit-capacity, zero-transit arc, where s is the source that was replaced by s′ in the first phase of the algorithm. An example network for a given source s is depicted in Figure 3.

Let ō be the maximum out-flow function of this auxiliary network, restricted to the domain from Section 3. Notice how adding s₁ to a set of terminals X with s* ∈ X and s′ ∉ X is equivalent to increasing α by one, as the additional unit of flow cannot bypass the arcs through s₁. Formally, it holds for every set X in the restricted domain that
-
1. ō(X) = o_α(X), and
-
2. ō(X ∪ {s₁}) = o_{α+1}(X).
Clearly, the function ō is also submodular. Hence, given two sets X ⊆ Y in the restricted domain, the definition of submodularity in Equation 2 can be restated as

ō(X ∪ {s₁}) − ō(X) ≥ ō(Y ∪ {s₁}) − ō(Y). | (4)

Combining Equation 4 with all our previous observations, it follows that the sets X and Y satisfy

o_{α+1}(X) − o_α(X) ≥ o_{α+1}(Y) − o_α(Y).

Hence, we have shown that o_α → o_{α+1} holds for every parameter value α. Finally, recall that, on the restricted domain, b_{α+1} − b_α is the same constant for every set. Due to the strong map property of the out-flow functions, the balance terms cancel in all differences, and we get for all X ⊆ Y that

(o_α − b_α)(Y) − (o_α − b_α)(X) ≥ (o_{α+1} − b_{α+1})(Y) − (o_{α+1} − b_{α+1})(X),

therefore proving that (o_α − b_α) → (o_{α+1} − b_{α+1}). Thus, by transitivity of →, our claim holds.
Next, we utilize a similar argument to prove that the restricted functions o_δ − b_δ form a strong map sequence. Here, we rely on our specific definition of the δ-parametric network N_δ.
Lemma 18.
Let 0 ≤ δ ≤ δ′. Then the restricted functions satisfy (o_{δ′} − b_{δ′}) → (o_δ − b_δ).
Proof.
The general approach is identical to the proof of Lemma 17. That is, we show that o_{δ+1} → o_δ forms a strong map for every δ. Afterwards, we transfer this result to the restricted functions o_δ − b_δ. The general claim then directly follows from the transitivity of →.

Again, let δ be arbitrary but fixed. We construct an auxiliary network from the δ-parametric network by inserting a new node s₁ on the arc towards s′, where s is the source that was replaced by s′ in the first step of the algorithm. An example for a source s is given in Figure 4.

Again, ō denotes the maximum out-flow function of this network restricted to the domain. In this construction, we have replaced the arc (s*, s′) by a path consisting of the arcs (s*, s₁) and (s₁, s′) with a combined transit time of δ + 1. As a consequence, the maximum out-flow remains unchanged for sets X without s₁, that is, ō(X) = o_{δ+1}(X). If s₁ ∈ X and s* ∈ X, on the other hand, both sources compete for access to the unit-capacity arc (s₁, s′). Then, the flow out of X is maximized by sending all flow from s₁, which reaches s′ within transit time δ. Formally, it holds for every set X in the restricted domain that
-
1. ō(X) = o_{δ+1}(X), and
-
2. ō(X ∪ {s₁}) = o_δ(X).

These observations plus the second definition of submodularity from Equation 2 imply for all sets X ⊆ Y that

o_δ(X) − o_{δ+1}(X) ≥ o_δ(Y) − o_{δ+1}(Y).

Therefore, the out-flow functions satisfy the strong map o_{δ+1} → o_δ. Finally, by the same arguments as in the proof of Lemma 17, the restricted parametric functions satisfy (o_{δ+1} − b_{δ+1}) → (o_δ − b_δ) as claimed.
In the following section, we use the strong map property and the resulting nested sequences of minimizers to construct new parametric search algorithms for MaximizeAlpha and MinimizeDelta.
5 An Improved Parametric Search
In this section, we adapt Schlöter's [11] parametric search algorithm for the single-source or single-sink quickest transshipment problem to MaximizeAlpha and MinimizeDelta. We start by showing that o_α − b_α and o_δ − b_δ are monotonic in their respective parameters.
Lemma 19.
Given a set of terminals X from the restricted domain, both maps α ↦ (o_α − b_α)(X) and δ ↦ (o_δ − b_δ)(X) are monotonic in the parameters α and δ, respectively. That is, we have (o_α − b_α)(X) ≥ (o_{α′} − b_{α′})(X) and (o_δ − b_δ)(X) ≤ (o_{δ′} − b_{δ′})(X) for all α ≤ α′ and δ ≤ δ′.
Proof.
Let α ≤ α′ and δ ≤ δ′ be arbitrary but fixed parameter values. We decompose (o_α − b_α)(X) (and, analogously, (o_δ − b_δ)(X)) into its out-flow term o_α(X) and its balance term b_α(X). | (5)

Observe that we can treat parts of both terms as constant: since the network changes only at the arc incident to s*, the out-flow contribution of X ∖ {s*} is identical for all α (or δ). Similarly, the value of b(X ∖ {s*}) does not depend on α or δ.

We now prove monotonicity using this simplification. For MaximizeAlpha, the monotonicity holds if and only if the gain in out-flow o_{α′}(X) − o_α(X) never exceeds the gain in balance b_{α′}(X) − b_α(X), which follows directly from the strong map property shown in the proof of Lemma 17. Similarly, replacing α with δ in Equation 5, it follows from the proof of Lemma 18 that (o_δ − b_δ)(X) ≤ (o_{δ′} − b_{δ′})(X) holds.
The monotonicity and strong map property of o_α − b_α and o_δ − b_δ allow us to introduce new parametric search algorithms for MaximizeAlpha (cf. Algorithm 1) and MinimizeDelta (cf. Algorithm 2). At their core, they follow a rather simple approach similar to algorithms by Schlöter, Skutella, and Tran [12, 11]: we start with an infeasible parameter value α or δ and a corresponding minimizer of o_α − b_α or o_δ − b_δ. Possible initial values are given by Hoppe and Tardos [4]. Next, we alternate between two steps, jump and check, until the current minimizer is no longer violated. In the jump step, we compute the largest parameter value α, or the smallest parameter value δ, such that the previous minimizer is no longer violated. Note that this step is, in essence, an integral parametric min-cost flow problem with one parametrized arc. Afterwards, the check step finds a minimizer for the new value of α or δ.
In the check step, we use the results of the previous sections in order to restrict the search for minimizers to the domain from Corollary 14. We will show in the following that this restriction is actually feasible. Note that the restriction not only reduces the ground set in each iteration to B ∖ A, but also limits MaximizeAlpha and MinimizeDelta to at most |B ∖ A| + 1 iterations, since the length of any chain of nested minimizers is limited by |B ∖ A| + 1.
Lemma 20.
Algorithm 1 returns the maximum feasible parameter value α*, and Algorithm 2 returns the minimum feasible parameter value δ*.
Proof.
We first show by induction that, if the parameter value α_i of iteration i is infeasible, then the minimal minimizer X_i of o_{α_i} − b_{α_i} satisfies X_i ⊊ X_{i−1}, so the minimizers form a strictly shrinking chain.

By definition, X_0 is a minimizer of o_{α_0} − b_{α_0}. Let α_i with i ≥ 1 be infeasible, and assume that X_{i−1} is the minimal minimizer of o_{α_{i−1}} − b_{α_{i−1}}. We have α_i < α_{i−1}, since otherwise X_{i−1} contradicts the choice of α_i in the jump step as the largest value for which X_{i−1} is no longer violated. Together with Lemma 17, this implies the relation (o_{α_i} − b_{α_i}) → (o_{α_{i−1}} − b_{α_{i−1}}). Therefore, we have X_i ⊆ X_{i−1} due to Lemma 16, with X_i being the minimal minimizer of o_{α_i} − b_{α_i}. Since α_i is infeasible, X_i is violated, while, by the choice of α_i in the jump step, X_{i−1} is not violated for α_i, so we even have X_i ⊊ X_{i−1}. Hence, X_i is a feasible choice in Algorithm 1.
Recall that Algorithm 1 terminates after at most |B ∖ A| + 1 iterations. Let ᾱ be the value returned by the algorithm, and let X_ℓ be the minimizer generated in the final iteration ℓ. If ᾱ were not feasible, then X_ℓ would be a minimizer with negative value, and the while loop would continue. However, since the algorithm terminates, it follows that ᾱ is feasible.
To see that ᾱ is maximum, recall that the initial parameter value is infeasible, and thus the jump step is executed at least once. By construction of the jump step, the value ᾱ is maximum such that the previous minimizer is no longer violated. Hence, for any α > ᾱ, the previous minimizer is violated, so α is infeasible. Therefore, ᾱ is the optimal solution to MaximizeAlpha.
The proof for Algorithm 2 is analogous.
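The jump-and-check loop underlying Algorithms 1 and 2 can be sketched abstractly as follows; `excess` and `min_minimizer` are hypothetical oracles standing in for the parametric min-cost flow and the restricted submodular function minimization of the paper, respectively.

```python
def maximize_alpha_jump_check(excess, min_minimizer, alpha_init):
    """Skeleton of the improved parametric search for MaximizeAlpha.

    Starting from an infeasible integral parameter value, the loop
    alternates a jump step (largest alpha at which the current
    minimizer stops being violated; here a plain scan, in the paper a
    parametric min-cost flow via Megiddo's search) with a check step
    (recompute a minimal minimizer at the new value).
    `excess(S, alpha)` is positive iff S is violated at alpha.
    """
    alpha = alpha_init
    S = min_minimizer(alpha)
    while excess(S, alpha) > 0:       # current minimizer is violated
        while excess(S, alpha) > 0:   # jump: move alpha down to the
            alpha -= 1                # largest value where S is fine
        S = min_minimizer(alpha)      # check: minimizer at new alpha
    return alpha, S
```

By the nestedness of the minimizers (Lemma 16), the check step can produce only linearly many distinct sets, so the loop performs only a linear number of SFM calls.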
The remainder of the section is devoted to the runtime analysis. Our main improvement is due to the upper bound on the number of iterations the algorithms execute.
Theorem 21.
Algorithm 1 can be implemented to solve MaximizeAlpha in strongly polynomial time using Megiddo's parametric search in the jump step, and in weakly polynomial time using a binary search instead.
Proof.
The runtime is determined by the two main steps performed in each iteration:
-
Jump can be implemented using Megiddo's parametric search [8] or a binary search over the range of parameter values, each in conjunction with a minimum cost flow algorithm. The former yields a strongly polynomial, the latter a weakly polynomial bound.
-
Check minimizes the restricted submodular function on the domain from Corollary 14, which requires one submodular function minimization over at most |B ∖ A| ≤ k elements.
Together with the upper bound of |B ∖ A| + 1 on the number of iterations of the while loop, the claimed runtimes follow.
We obtain an analogous runtime for Algorithm 2.
Theorem 22.
Algorithm 2 can be implemented to solve MinimizeDelta in strongly polynomial time using Megiddo's parametric search in the jump step, and in weakly polynomial time using a binary search instead.
Proof.
The proof is analogous to that of Theorem 21. The binary search for the minimum feasible value of δ is done on the range of possible transit times, which yields a different logarithmic term.
We conclude this section with an improved runtime for computing integral dynamic transshipments compared to the state of the art.
Theorem 23.
Given a dynamic transshipment instance (N, b, T), an integral quickest transshipment can be computed in strongly polynomial time, improving upon the previously best known runtime.
Proof.
Hoppe and Tardos [4] already proved that their algorithm for the dynamic transshipment problem terminates after O(k) iterations, each of which consists of one call of Algorithms 1 and 2. Given the strongly polynomial runtimes of both subroutines from Theorems 21 and 22, we obtain the claimed worst-case complexity.

Suppressing polylogarithmic terms, the runtimes of both subroutines coincide, and we obtain an improved overall runtime for the integral dynamic transshipment problem.

In order to compute a quickest transshipment, we have to determine the minimum time horizon T first. This can be done by the method of Schlöter, Skutella, and Tran [12]. Its runtime is dominated by that of our algorithm for computing the corresponding transshipment: since we assume that the network is connected, we have n ≤ m + 1, so all bounds can be expressed in terms of m and k alone. Overall, the time required to compute a quickest integral transshipment is dominated by the transshipment computation, which shows Theorem 23.
6 Conclusion and Outlook
In this paper, we propose an improved version of the algorithm by Hoppe and Tardos for the integral quickest transshipment problem. Our approach is based on more efficient parametric search algorithms using the strong map property and yields a substantial reduction of the runtime.
Our findings open room for ensuing research. In particular, the restrictions of submodular functions to suitable domains introduced in this paper may provide even better bounds on the runtime of our algorithms. Furthermore, we see potential for improvements to the jump steps in Algorithms 1 and 2, which are currently based on Megiddo's parametric search and contribute a substantial factor to the runtime. This factor constitutes the remaining gap in runtime between the integral and the fractional quickest transshipment problem. In order to close this gap, future studies may focus on adapting parametric minimum cost flow algorithms akin to the algorithms by Lin and Jaillet [7] and Saho and Shigeno [10].
References
- [1] Mariia Anapolska, Dario van den Boom, Christina Büsing, and Timo Gersing. A faster parametric search for the integral quickest transshipment problem. arXiv preprint arXiv:2505.12975, 2025. doi:10.48550/arXiv.2505.12975.
- [2] Rainer E. Burkard, Karin Dlaska, and Bettina Klinz. The quickest flow problem. Zeitschrift für Operations Research, 37(1):31–58, 1993. doi:10.1007/BF01415527.
- [3] Lester R. Ford and Delbert R. Fulkerson. Flows in Networks. Princeton University Press, 1962.
- [4] Bruce Hoppe and Éva Tardos. The quickest transshipment problem. Mathematics of Operations Research, 25(1):36–62, 2000. doi:10.1287/moor.25.1.36.15211.
- [5] Satoru Iwata and James B. Orlin. A simple combinatorial algorithm for submodular function minimization. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1230–1237. SIAM, 2009. doi:10.1137/1.9781611973068.133.
- [6] Yin Tat Lee, Aaron Sidford, and Sam Chiu-Wai Wong. A faster cutting plane method and its implications for combinatorial and convex optimization. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), pages 1049–1065. IEEE, 2015. doi:10.1109/FOCS.2015.68.
- [7] Maokai Lin and Patrick Jaillet. On the quickest flow problem in dynamic networks – a parametric min-cost flow approach. In Piotr Indyk, editor, Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4-6, 2015, pages 1343–1356. SIAM, 2015. doi:10.1137/1.9781611973730.89.
- [8] Nimrod Megiddo. Combinatorial optimization with rational objective functions. In Tenth annual ACM symposium on Theory of computing (STOC), pages 1–12, 1978. doi:10.1145/800133.804326.
- [9] James Orlin. A faster strongly polynomial minimum cost flow algorithm. In Twentieth annual ACM symposium on Theory of Computing (STOC), pages 377–387, 1988. doi:10.1145/62212.62249.
- [10] Masahide Saho and Maiko Shigeno. Cancel-and-tighten algorithm for quickest flow problems. Networks, 69(2):179–188, 2017. doi:10.1002/net.21726.
- [11] Miriam Schlöter. Flows over time and submodular function minimization. PhD thesis, Technische Universität Berlin, 2018.
- [12] Miriam Schlöter, Martin Skutella, and Khai Van Tran. A faster algorithm for quickest transshipments via an extended discrete Newton method. In 2022 Symposium on Discrete Algorithms (SODA), pages 90–102. SIAM, 2022. doi:10.1137/1.9781611977073.5.
- [13] Martin Skutella. An introduction to network flows over time. Research Trends in Combinatorial Optimization: Bonn 2008, pages 451–482, 2009. doi:10.1007/978-3-540-76796-1_21.
- [14] Martin Skutella. An introduction to transshipments over time. arXiv preprint arXiv:2312.04991, 2023. doi:10.48550/arXiv.2312.04991.
- [15] Donald M. Topkis. Minimizing a Submodular Function on a Lattice. Operations Research, 26(2):305–321, 1978. doi:10.1287/opre.26.2.305.
