
A Faster Parametric Search for the Integral Quickest Transshipment Problem

Mariia Anapolska, Dario van den Boom (corresponding author), Christina Büsing, and Timo Gersing; Combinatorial Optimization, RWTH Aachen University, Germany
Abstract

Algorithms for computing fractional solutions to the quickest transshipment problem have been significantly improved since Hoppe and Tardos first solved the problem in strongly polynomial time. For integral solutions, however, no structural improvements on their algorithm itself have yet been proposed. Runtime improvements are limited to general progress on submodular function minimization (SFM), which is an integral part of Hoppe and Tardos’ algorithm. In fact, SFM constitutes the main computational load of the algorithm, as the runtime is blown up by using it within Megiddo’s parametric search algorithm. We replace this part of Hoppe and Tardos’ algorithm with a more efficient routine that solves only a linear number of SFM problems and, in contrast to previous techniques, exclusively uses minimum cost flow algorithms within Megiddo’s parametric search. Our approach improves the state-of-the-art runtime from Õ(m^4 k^15) down to Õ(m^2 k^5 + m^4 k^2), where k is the number of terminals and m is the number of arcs. (We use Õ to suppress polylogarithmic factors.)

Keywords and phrases:
Flow over time, dynamic transshipment, quickest transshipment, parametric submodular functions, efficient algorithms
Copyright and License:
[Uncaptioned image] © Mariia Anapolska, Dario van den Boom, Christina Büsing, and Timo Gersing; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation Network flows
Related Version:
Full Version: https://arxiv.org/abs/2505.12975 [1]
Acknowledgements:
We would like to express our gratitude to the anonymous reviewers for their valuable feedback and constructive comments, which have greatly contributed to improving the presentation of this paper.
Funding:
This work is co-funded by the German research council (DFG) – project number 442047500 – Collaborative Research Center Sparsity and Singular Structures (SFB 1481) and by the German research council (DFG) Research Training Group 2236 UnRAVeL.
Editors:
Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman

1 Introduction

Network flows over time, also referred to as dynamic flows, extend classical static network flows by a time component. They provide a powerful tool for modeling real-world problems in traffic engineering, building evacuation, and logistics. Over the last decades, a wide range of optimization problems dealing with flows over time have been studied. The maximum flow over time problem was studied in the seminal work of Ford and Fulkerson [3], who showed that it can be solved in polynomial time by a reduction to the static minimum cost flow problem. The quickest flow problem asks for the minimum time horizon T such that a given demand of D units of flow can be sent from a single source s to a single sink t. While a straightforward approach is to combine an algorithm for maximum flows over time with a parametric search algorithm [2], recent results have shown that cost scaling algorithms for minimum cost flows can be modified to efficiently compute quickest flows [7, 10].

The quickest transshipment problem generalizes the quickest flow problem by allowing for supply and demand at multiple sources and sinks. It is one of the most fundamental problems in the field of network flows over time and, as recently stated by Skutella [14], “arguably the most difficult flow over time problem that can still be solved in polynomial time.” As in the quickest flow problem, our goal is to send the required flow from sources to sinks while simultaneously minimizing the time horizon.

Similar to the quickest flow problem, the quickest transshipment problem can be solved by determining the minimum time horizon via parametric search. Using this idea, Hoppe and Tardos [4] showed that the quickest transshipment problem can be solved in strongly polynomial time, that is, their algorithm’s runtime is polynomially bounded in the number of nodes n, number of arcs m, and the combined number of sources and sinks k.

Recently, faster algorithms have been developed. Notably, Schlöter, Skutella, and Tran [12] proposed an algorithm with a time complexity of Õ(m^2 k^5 + m^3 k^3 + m^3 n). Unfortunately, these performance improvements come at the expense of fractional solutions, which may be undesirable for applications in which flow units cannot be split. While some of the results speed up the search for the optimal time horizon, no improvements have yet been proposed for finding integral flows over time. Hence, the state-of-the-art complexity for the integral quickest transshipment problem remains unchanged at Õ(m^4 k^15) [12].

Our Contribution

We propose improvements to Hoppe and Tardos’ algorithm by replacing two computationally expensive subroutines in which submodular function minimization is combined with Megiddo’s parametric search. By doing so, we reduce the state-of-the-art runtime for the integral quickest transshipment problem from Õ(m^4 k^15) to Õ(m^2 k^5 + m^4 k^2). This narrows the gap to the fractional quickest transshipment problem, which can be solved in Õ(m^2 k^5 + m^3 k^3 + m^3 n) time using the algorithm by Schlöter, Skutella, and Tran [12].

2 Preliminaries

Given a directed graph G=(V,A) with vertices V and arcs A, we define a dynamic network as a triple 𝒩=(G,u,τ) with a capacity u_a and a transit time τ_a ≥ 0 for each arc a ∈ A. For a given dynamic network 𝒩, the set V(𝒩) denotes the network’s nodes, while A(𝒩) refers to the network’s arcs. Throughout this paper, we denote the number of nodes |V(𝒩)| by n and the number of arcs |A(𝒩)| by m.

A flow over time is a family of functions f_a : [0,T) → ℝ_{≥0}, representing the in-flow rates for each arc a ∈ A(𝒩) at every point in time until the end of the time horizon T. The f_a(θ)-many flow units entering arc a at time θ ∈ [0,T) arrive at time θ + τ_a at the end node of a. While we only require f_a to satisfy weak flow conservation, Hoppe and Tardos’ algorithm computes an optimal solution satisfying even strict flow conservation, meaning that all flow units entering a non-sink node are immediately forwarded via an outgoing arc. We refer the reader to [13] for more details on flows over time.

Note that, in contrast to Hoppe and Tardos [4] who defined a flow over time as a static flow in a time-expanded network, we use a continuous-time model. However, all prerequisites for their algorithm directly translate from the discrete- to continuous-time model as shown in [14], meaning that it can also be implemented for the continuous-time model.

For the dynamic transshipment problem, we are given a triple (𝒩,b,T) comprising a dynamic network 𝒩, a balance function b : V(𝒩) → ℝ with ∑_{v ∈ V(𝒩)} b(v) = 0, and a time horizon T. The balances describe how supply and demand are distributed across the network. A node with positive balance b(v) > 0 is a source, while a node with negative balance is a sink. Let S⁺ denote the set of sources, S⁻ the set of sinks, and S = S⁺ ∪ S⁻ the set of terminals. A dynamic transshipment instance is feasible if there exists a flow over time sending the supply from the sources to the sinks such that all demands are satisfied.

Definition 1.

Given a subset of terminals X ⊆ S, the maximum out-flow o(X) out of X is the value of the maximum flow over time from the sources S⁺ ∩ X to the sinks S⁻ ∖ X.

The central feasibility criterion states that the net balance b(X) ≔ ∑_{v ∈ X} b(v) must not exceed the maximum out-flow o(X) for any X ⊆ S.

Theorem 2 (Feasibility Criterion [4]).

The dynamic transshipment instance (𝒩,b,T) is feasible if and only if v(X) ≔ o(X) − b(X) ≥ 0 for all X ⊆ S.

We call a set X ⊆ S with v(X) < 0 a violated set. In order to determine the feasibility of a given dynamic transshipment instance, it suffices to show that no violated set exists. However, while the value of o(X), and thus of v(X), can be computed with the Ford–Fulkerson algorithm for maximum flows over time, avoiding the enumeration of all subsets X ⊆ S is not obvious. Fortunately, we can use the submodularity of o : 2^S → ℝ_{≥0} and v : 2^S → ℝ.

Definition 3 (Submodular Function).

A function f : 2^S → ℝ over a finite ground set S is a submodular function if for all X, Y ⊆ S it holds that

f(X ∪ Y) + f(X ∩ Y) ≤ f(X) + f(Y), (1)

or, equivalently, if for all s ∈ S and X ⊆ Y ⊆ S ∖ {s} it holds that

f(Y ∪ {s}) − f(Y) ≤ f(X ∪ {s}) − f(X). (2)

Given a submodular function f over S, we call a set X* ∈ argmin_{X ⊆ S} f(X) a minimizer of f. It is well known that the set of minimizers of a submodular function is closed under union and intersection. Therefore, there always exist a minimal minimizer and a maximal minimizer, which are the intersection and union of all minimizers, respectively.
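As a small self-contained illustration (not part of any algorithm in this paper), this lattice structure can be verified by brute force for a toy submodular function; we use the out-cut function of an invented digraph, a standard example of a submodular function:

```python
from itertools import combinations

def cut_value(X, arcs):
    """Out-cut |δ⁺(X)| of a digraph; directed cut functions are submodular."""
    return sum(1 for (u, v) in arcs if u in X and v not in X)

# invented toy instance
ground = {0, 1, 2, 3}
arcs = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 1)]
f = lambda X: cut_value(X, arcs)

subsets = [frozenset(c) for r in range(len(ground) + 1)
           for c in combinations(sorted(ground), r)]
best = min(f(X) for X in subsets)
minimizers = [X for X in subsets if f(X) == best]

# the set of minimizers is closed under union and intersection ...
for X in minimizers:
    for Y in minimizers:
        assert f(X | Y) == best and f(X & Y) == best

# ... hence the minimal and maximal minimizers exist
minimal = frozenset.intersection(*minimizers)
maximal = frozenset.union(*minimizers)
assert f(minimal) == best and f(maximal) == best
print(sorted(minimal), sorted(maximal))  # prints: [] [0, 1, 2, 3]
```

Here the only closed sets of the toy digraph are ∅ and the whole vertex set, so these are exactly the minimal and maximal minimizers.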

In the context of submodular function minimization (SFM), a submodular function f is typically provided in the form of an evaluation oracle with complexity O(EO). The performance of algorithms is measured in the number of oracle calls required for finding a minimizer. The fastest strongly polynomial algorithm for submodular function minimization is due to Lee, Sidford, and Wong [6] with a runtime of O(k^3 log^2 k · EO + k^4 log^{O(1)} k) for k = |S|.

For our purpose, the evaluation oracle computes a maximum out-flow o(X) out of terminals X ⊆ S using the Ford–Fulkerson algorithm for maximum flows over time. To this end, we introduce a super-source s⁺ and a super-sink s⁻ and connect them to the sources s ∈ S⁺ ∩ X and the sinks t ∈ S⁻ ∖ X, respectively, via infinite-capacity, zero-transit arcs. Then the maximum out-flow o(X) is the value of the maximum flow over time from s⁺ to s⁻. This value can be computed via a static min-cost flow in O(m log n (m + n log n)) or Õ(m^2) time using Orlin’s algorithm [9]. We abbreviate this runtime as O(MCF(n,m)).
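For intuition, the super-terminal construction can be sketched in code. The sketch below is deliberately restricted to the special case of all-zero transit times, in which the maximum flow over time from s⁺ to s⁻ with horizon T is simply T times a static maximum flow; the function names and the toy instance are our own, and the general case requires the Ford–Fulkerson reduction via a static min-cost flow instead:

```python
from collections import deque

def max_flow(cap, s, t):
    """Plain Edmonds-Karp max flow on a capacity matrix (modified in place)."""
    n, flow = len(cap), 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:          # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] += aug
        flow += aug

INF = 10 ** 9

def out_flow(nodes, arcs, sources, sinks, X, T):
    """o(X) for τ ≡ 0: connect a super-source to S⁺ ∩ X and the sinks
    S⁻ ∖ X to a super-sink via infinite-capacity arcs, then scale by T."""
    idx = {v: i for i, v in enumerate(nodes)}
    sp, sm = len(nodes), len(nodes) + 1        # super-source / super-sink
    cap = [[0] * (len(nodes) + 2) for _ in range(len(nodes) + 2)]
    for (u, v, c) in arcs:
        cap[idx[u]][idx[v]] += c
    for s in set(sources) & set(X):
        cap[sp][idx[s]] = INF
    for t_ in set(sinks) - set(X):
        cap[idx[t_]][sm] = INF
    return T * max_flow(cap, sp, sm)

# invented toy network: one source a, one sink c, bottleneck capacity 2
value = out_flow(nodes=['a', 'b', 'c'],
                 arcs=[('a', 'b', 2), ('b', 'c', 3)],
                 sources=['a'], sinks=['c'], X={'a'}, T=4)
print(value)  # prints: 8
```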

Consequently, determining whether a dynamic transshipment instance (𝒩,b,T) is feasible takes O(k^3 log^2 k · MCF(n,m) + k^4 log^{O(1)} k) or Õ(m^2 k^3) time, where k = |S| is the number of terminals. To improve readability, we denote the time it takes to check whether an instance with k terminals, n nodes, and m arcs is feasible by O(SFM(k,n,m)).

The Algorithm by Hoppe and Tardos

The algorithm by Hoppe and Tardos [4] was the first strongly polynomial algorithm computing quickest transshipments and remains the most efficient one for integral solutions. In the following, we assume that the minimum time horizon T is provided and focus on the algorithm’s segment that computes an integral dynamic transshipment. The algorithm relies on the concept of tight orders.

Definition 4 (Tight Set and Order).

A set of terminals X ⊆ S is called tight if o(X) = b(X) holds. Let ⪯ be a total order on S. We call ⪯ tight if {t′ ∈ S : t′ ⪯ t} is tight for all t ∈ S.
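Definition 4 translates directly into a prefix check; in the sketch below, o and b are assumed set-function oracles standing in for the actual flow computations:

```python
def is_tight_order(order, o, b):
    """Check Definition 4: every prefix X = {t' : t' precedes-or-equals t}
    must be tight, i.e. satisfy o(X) = b(X)."""
    prefix = set()
    for t in order:
        prefix.add(t)
        X = frozenset(prefix)
        if o(X) != b(X):
            return False
    return True

# invented modular toy oracles: every prefix is tight
o = lambda X: 2 * len(X)
b = lambda X: 2 * len(X)
assert is_tight_order(['s1', 's2', 's3'], o, b)

# if one prefix has slack, the order is not tight
b_slack = lambda X: 2 * len(X) - (1 if 's2' in X else 0)
assert not is_tight_order(['s1', 's2', 's3'], o, b_slack)
```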

The general idea by Hoppe and Tardos is to construct an equivalent dynamic transshipment instance for which a tight order exists.

Theorem 5 (Reduction to Lex-Max Flows [4]).

Given a dynamic transshipment instance (𝒩,b,T) with a tight order ⪯ over the terminals S, an integral dynamic transshipment satisfying b can be computed as a lex-max flow over time in O(k · MCF(n,m)) time.

We refer to [14] for more details on lex-max flows over time. Although computing the integral flow is quite efficient, transforming (𝒩,b,T) into an equivalent instance that admits a tight order is computationally demanding. We propose improvements to this transformation in Section 5. Before that, we briefly discuss the approach by Hoppe and Tardos.

First, one adds a new terminal sˇ with b(sˇ) = b(s) for every terminal s ∈ S and then sets b(s) = 0. Note that the set of terminals S now only contains the nodes sˇ. By adding infinite-capacity, zero-transit arcs (sˇ,s) for sources s ∈ S⁺ and (s,sˇ) for sinks s ∈ S⁻, one ensures that the resulting dynamic transshipment instance is equivalent to the original instance (cf. Figure 1).

To construct an instance admitting a tight order, we iteratively shift supply / demand from the terminals sˇ to new terminals ŝ. In this paper, we call the sˇ drained terminals and the ŝ filled terminals. We refer the reader to Hoppe and Tardos [4] and the description in the extended paper [1] for a general overview of the algorithm. We focus on the two subroutines, MaximizeAlpha and MinimizeDelta, which compute the amount of supply / demand that is shifted from sˇ to ŝ.

When discussing both subroutines, we assume that a feasible dynamic transshipment instance (𝒩,b,T) is given, with a set of terminals S consisting of the drained terminals sˇ, added in the first step, and all filled terminals ŝ that were introduced in previous iterations. Furthermore, we are given two tight sets Q ⊆ R ⊆ S and a drained terminal sˇ ∈ R ∖ Q satisfying o(Q ∪ {sˇ}) > b(Q ∪ {sˇ}).

Note that the instance (𝒩,b,T), its set of terminals S as well as Q, R, and sˇ are the input of our subroutines and vary between iterations. In contrast, every subroutine adds a new filled terminal. However, since both subroutines are discussed in isolation, we simplify the notation by referring to both filled terminals as ŝ. Throughout this paper, we define Q̂ ≔ Q ∪ {ŝ} and R̂ ≔ R ∪ {ŝ}. Similar to Hoppe and Tardos, we only discuss the case in which sˇ is a source, since the treatment of sinks is symmetrical and is sketched in the extended paper [1].

Capacity-Parametric Instances

Figure 1: A dynamic transshipment instance (𝒩,b,T) after initialization with terminal sˇ (left) and the corresponding α-parametric instance (𝒩α,bα,T) for MaximizeAlpha (right).

The first subroutine starts with tight sets Q and R and a drained source sˇ ∈ R ∖ Q for which Q ∪ {sˇ} is not tight. Its aim is to reassign as much supply as possible to a new filled source ŝ. In doing so, we capacitate the out-flow of ŝ such that Q̂ is tight and the transformed instance remains feasible.

Definition 6 (α-Parametric Dynamic Network).

Given a dynamic network 𝒩, a drained source sˇ ∈ S⁺ and a parameter α ≥ 0, the α-parametric network 𝒩_α is constructed by adding a new filled source ŝ and connecting it to s via an α-capacity, zero-transit arc (ŝ,s).

Let o_α : 2^{S ∪ {ŝ}} → ℝ_{≥0} be the parametric counterpart of the maximum out-flow from Definition 1 in the parametric network 𝒩_α. Using this notation, we define α-parametric dynamic transshipment instances analogously to Hoppe and Tardos [4]. The construction of an α-parametric instance is illustrated in Figure 1.

Definition 7 (α-Parametric Dynamic Transshipment Instance).

Given a feasible dynamic transshipment instance (𝒩,b,T), two tight sets of terminals Q ⊆ R ⊆ S, a drained source sˇ ∈ R ∖ Q and a parameter α ≥ 0, the corresponding α-parametric dynamic transshipment instance (𝒩_α,b_α,T) consists of the following components.

  • An α-parametric dynamic network 𝒩_α as in Definition 6.

  • An α-parametric balance function b_α with b_α(t) = b(t) for all terminals t ∈ S ∖ {sˇ}, b_α(ŝ) = Δ_α, and b_α(sˇ) = b(sˇ) − Δ_α, where Δ_α ≔ o_α(Q̂) − o_α(Q).

We call a parameter value α ≥ 0 feasible if the corresponding α-parametric dynamic transshipment instance (𝒩_α,b_α,T) is feasible. Determining whether a value α is feasible is equivalent to checking whether a violated set X ⊆ S ∪ {ŝ} exists. Recall that this can be done by minimizing the parametric submodular function v_α(X) ≔ o_α(X) − b_α(X). A subroutine of the algorithm by Hoppe and Tardos finds a maximum feasible parameter value α ≥ 0.

Definition 8 (MaximizeAlpha).

Given an α-parametric dynamic transshipment instance, find the maximum feasible α ≥ 0.

We denote the maximum feasible parameter value by α*. Hoppe and Tardos concluded that, given a feasibility oracle for (𝒩_α,b_α,T) taking O(SFM(k,n,m)) time, the value α* can be found in O(log(nU_max) · SFM(k,n,m)) time, where U_max ≔ max_{a ∈ A(𝒩)} u_a. In addition, one can combine Megiddo’s parametric search [8] with the combinatorial SFM algorithm by Iwata and Orlin [5] to achieve a strongly polynomial runtime of Õ(m^4 k^14). We mainly improve upon the strongly polynomial approach.
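The weakly polynomial variant is an ordinary monotone binary search. A minimal sketch, assuming an integral optimum and a black-box feasibility oracle (each call of which stands in for one SFM computation):

```python
def maximize_alpha(is_feasible, alpha_max):
    """Largest feasible integer α in [0, alpha_max], assuming α = 0 is
    feasible and the feasible values form an interval [0, α*].
    Uses O(log(alpha_max)) oracle calls."""
    lo, hi = 0, alpha_max
    while lo < hi:
        mid = (lo + hi + 1) // 2   # round up so the loop always progresses
        if is_feasible(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# invented toy oracle with α* = 13
assert maximize_alpha(lambda a: a <= 13, 100) == 13
assert maximize_alpha(lambda a: a <= 0, 50) == 0
```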

Transit-Parametric Instances

Figure 2: A dynamic transshipment instance (𝒩,b,T) after initialization with terminal sˇ (left) and the corresponding δ-parametric instance (𝒩δ,bδ,T) for MinimizeDelta (right).

The second subroutine is closely related to MaximizeAlpha. Again, we start with tight sets Q and R and a drained source sˇ ∈ R ∖ Q for which Q ∪ {sˇ} is not tight. The aim is to reassign as much supply as possible to a new filled source ŝ. In doing so, we set the transit time for the flow out of ŝ such that Q̂ is tight and the transformed instance remains feasible.

Definition 9 (δ-Parametric Dynamic Network).

Given a dynamic network 𝒩, a drained source sˇ ∈ S⁺ and a parameter δ ≥ 0, the δ-parametric network 𝒩_δ is constructed by adding a terminal ŝ and connecting it to s via a unit-capacity, δ-transit arc (ŝ,s).

Analogously to o_α, we denote the maximum out-flow of a subset of terminals X ⊆ S ∪ {ŝ} by o_δ(X). Next, we combine this parametric variant of our dynamic network with a parametric balance function to form a δ-parametric dynamic transshipment instance (cf. Figure 2).

Definition 10 (δ-Parametric Dynamic Transshipment Instance).

Given a feasible dynamic transshipment instance (𝒩,b,T), two tight sets of terminals Q ⊆ R ⊆ S, a drained source sˇ ∈ R ∖ Q and a parameter δ ≥ 0, the δ-parametric dynamic transshipment instance (𝒩_δ,b_δ,T) consists of the following components.

  • A δ-parametric dynamic network 𝒩_δ as given in Definition 9.

  • A δ-parametric balance function b_δ with b_δ(t) = b(t) for all terminals t ∈ S ∖ {sˇ}, b_δ(ŝ) = Δ_δ, and b_δ(sˇ) = b(sˇ) − Δ_δ, where Δ_δ ≔ o_δ(Q̂) − o_δ(Q).

Again, a parameter value δ ≥ 0 is feasible if the corresponding parametric dynamic transshipment instance (𝒩_δ,b_δ,T) is feasible. This can be checked by minimizing the parametric submodular function v_δ(X) ≔ o_δ(X) − b_δ(X). Again, we call a set X ⊆ S ∪ {ŝ} with v_δ(X) < 0 violated. This yields the following parametric search problem.

Definition 11 (MinimizeDelta).

Given a δ-parametric dynamic transshipment instance, find the minimum feasible δ ≥ 0.

Again, δ* denotes the minimum feasible parameter value. Hoppe and Tardos showed that the optimal value δ* can be found in O(log(T) · SFM(k,n,m)) time, or in strongly polynomial time of Õ(m^4 k^14) using the parametric search of Megiddo [8].

The algorithm by Hoppe and Tardos performs a total of k iterations, each of which calls both subroutines MaximizeAlpha and MinimizeDelta once. Therefore, the current version of the algorithm by Hoppe and Tardos takes O(k log(nTU_max) · SFM(k,n,m)) time if both subroutines are implemented with a binary search, while Megiddo’s parametric search results in a runtime of Õ(m^4 k^15). In the following sections, we improve the runtime of each iteration by introducing better parametric search algorithms for both subroutines.

3 Restricting the Domains of Violated Sets

Both problems MaximizeAlpha and MinimizeDelta introduced in the previous section rely on minimizing submodular functions to determine whether a parameter value is feasible. Even with state-of-the-art algorithms for SFM, this remains a computationally expensive subroutine that scales poorly with the number of terminals in the ground set.

We show that, if our parametric instance is infeasible, there always exists a violated set X ⊆ S ∪ {ŝ} satisfying Q̂ ⊆ X ⊆ R̂, which allows us to restrict the ground sets of our parametric submodular functions v_α and v_δ. This provides a practical improvement and establishes the foundation for the following sections.

Lemma 12.

Let X ⊆ S ∪ {ŝ} be a violated set for an α-parametric dynamic transshipment instance with respect to a drained source sˇ. Then ŝ ∈ X and sˇ ∉ X. The same applies to δ-parametric dynamic transshipment instances.

Proof.

We only prove the statement for α-parametric instances, as the reasoning is analogous for δ-parametric instances. Let X ⊆ S ∪ {ŝ} be an arbitrary subset of terminals. Given that we assume the original transshipment instance to be feasible, we have v(Y) ≥ 0 for every Y ⊆ S. Remember that the maximum out-flow o_α(X) is the value of a maximum flow over time from a super-source s⁺ to a super-sink s⁻, where infinite-capacity, zero-transit arcs are used to connect s⁺ to every source in S⁺ ∩ X and to connect every sink in S⁻ ∖ X to s⁻. Using this definition, we derive the following observations regarding o_α(X):

  1. (O1) If X contains neither ŝ nor sˇ, then the maximum out-flow o_α(X) coincides with the out-flow o(X) in the original instance.

  2. (O2) If X contains sˇ, then all flow traveling from s⁺ to s⁻ can bypass the α-capacity arc (ŝ,s) and move along the infinite-capacity arcs (s⁺,sˇ) and (sˇ,s) instead. As a consequence, we have o_α(X) = o(X ∖ {ŝ}).

We refer the reader back to Figure 1 for a visual intuition. Combining these observations with the parametric balances b_α(ŝ) = Δ_α and b_α(sˇ) = b(sˇ) − Δ_α, we prove Lemma 12 by considering the following three complementary cases.

  1. (I) If ŝ ∈ X and sˇ ∈ X, then it follows that

     b_α(X) = b_α(X ∖ {ŝ,sˇ}) + b_α(ŝ) + b_α(sˇ) = b_α(X ∖ {ŝ,sˇ}) + b(sˇ) = b(X ∖ {ŝ}),

     where the second equality uses the definition of b_α. Together with Observation (O2), we obtain v_α(X) = v(X ∖ {ŝ}) ≥ 0, meaning that X cannot be a violated set.

  2. (II) If ŝ ∉ X and sˇ ∈ X, then it follows that b_α(X) = b(X) − Δ_α ≤ b(X), which, together with Observation (O2), implies that X cannot be a violated set since v_α(X) ≥ v(X) ≥ 0.

  3. (III) If ŝ ∉ X and sˇ ∉ X, then b_α(X) = b(X) and therefore X cannot be a violated set, as it follows from Observation (O1) that v_α(X) = o_α(X) − b_α(X) = o(X) − b(X) = v(X) ≥ 0.

Hence, X can only be a violated set of (𝒩_α,b_α,T) if ŝ ∈ X and sˇ ∉ X.

Hoppe and Tardos [4] proved the property from Lemma 12 for the special case of the infeasible parameter value δ* − 1 and used it to show that there exists a set X satisfying Q̂ ⊆ X ⊆ R̂ which is violated for δ* − 1 and tight for δ*. We employ similar arguments to show that this also holds for all infeasible α, δ ≥ 0.

Lemma 13.

Let α ≥ 0 be an infeasible parameter value for MaximizeAlpha. Then there is a minimizer X of v_α with Q̂ ⊆ X ⊆ R̂. Analogously, let δ ≥ 0 be an infeasible parameter value for MinimizeDelta. Then there is a minimizer X of v_δ with Q̂ ⊆ X ⊆ R̂.

Proof.

We only prove the statement for MaximizeAlpha, as the proof for MinimizeDelta is analogous. Let X be an arbitrary minimizer of v_α with v_α(X) < 0. It follows from Lemma 12 that ŝ ∈ X and sˇ ∉ X. Next, we show that the set X′ ≔ Q ∪ (X ∩ R̂) is also a minimizer of v_α. For this, we analyze the tightness of the sets Q, Q̂, and R̂:

  • Q was chosen to be a tight set for (𝒩,b,T). This does not change in the parametric instance, as ŝ, sˇ ∉ Q and hence v_α(Q) = v(Q) = 0.

  • Q̂ is tight, since b_α(Q̂) = b_α(Q) + Δ_α = o_α(Q) + o_α(Q̂) − o_α(Q) = o_α(Q̂), where the second equality uses that Q is tight.

  • R̂ is tight because R is tight and ŝ, sˇ ∈ R̂ directly imply o_α(R̂) = o(R) = b(R) = b_α(R̂).

Having shown that Q and R̂ are tight sets, we study how tight sets and minimizers of v_α behave under union and intersection. For this purpose, let Y ∈ {Q, R̂}. It follows from the submodularity of v_α that

v_α(X ∪ Y) + v_α(X ∩ Y) ≤ v_α(X) + v_α(Y) = v_α(X) < 0. (3)

Note that both summands on the left hand side are non-positive, since otherwise one of them would be smaller than the minimum value vα(X). Hence, either both summands are negative, or one is equal to zero while the other equals the minimum value of vα. In other words, exactly one of the following properties must hold:

  1. (1) Both X ∪ Y and X ∩ Y are violated sets of v_α.

  2. (2) Either X ∪ Y or X ∩ Y is a minimizer, while the other set is tight.

We apply this case distinction to the cases where Y=Q and Y=R^:

  • Let Y = R̂. Then Lemma 12 states that a violated set cannot contain sˇ, implying that v_α(X ∪ R̂) ≥ 0 since sˇ ∈ R̂. This means that X ∩ R̂ is a minimizer of v_α.

  • Now consider Y = Q together with the minimizer X ∩ R̂ in place of X. We rule out Q ∩ (X ∩ R̂) = Q ∩ X as a violated set since ŝ ∉ Q ∩ X. Therefore, Q ∪ (X ∩ R̂) is a minimizer.

Finally, observe that the minimizer X′ = Q ∪ (X ∩ R̂) not only satisfies X′ ⊆ R̂, since Q ⊆ R̂, but also Q̂ ⊆ X′, since ŝ ∈ X ∩ R̂.

Lemma 13 allows us to determine the feasibility of a parameter value by minimizing the restricted functions ṽ_α : 2^{R̂ ∖ Q̂} → ℝ and ṽ_δ : 2^{R̂ ∖ Q̂} → ℝ with ṽ_α(X) ≔ v_α(Q̂ ∪ X) and ṽ_δ(X) ≔ v_δ(Q̂ ∪ X). For future use, we define the functions õ_α, õ_δ, b̃_α, and b̃_δ analogously.

This brings us to the main result in this section.

Corollary 14.

A parameter value α ≥ 0 is feasible for MaximizeAlpha if and only if ṽ_α(X) ≥ 0 for every X ⊆ R̂ ∖ Q̂. Analogously, a parameter value δ ≥ 0 is feasible for MinimizeDelta if and only if ṽ_δ(X) ≥ 0 for every X ⊆ R̂ ∖ Q̂.

The obvious advantage of this result is the reduction of the domain of the submodular functions of interest to R̂ ∖ Q̂. Although we have R = S and Q = ∅ at the beginning, the difference becomes smaller over the course of Hoppe and Tardos’ algorithm, until both sets are eventually equal. This yields a significant practical speed-up in later iterations of the algorithm. Moreover, the reduction brings further practical and theoretical advantages, which we discuss in the following section.
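In code, the restriction of Corollary 14 is a thin wrapper around the original function; combined with a brute-force feasibility check, it shrinks the enumeration from 2^|S| to 2^|R̂ ∖ Q̂| subsets. The function v below is an invented toy and merely illustrates the wrapper:

```python
from itertools import combinations

def restrict(v, Q_hat, R_hat):
    """Restricted function ṽ(X) := v(Q̂ ∪ X) over the ground set R̂ ∖ Q̂."""
    ground = frozenset(R_hat) - frozenset(Q_hat)
    return (lambda X: v(frozenset(Q_hat) | X)), ground

def is_feasible(v_tilde, ground):
    """Brute-force stand-in for SFM: no subset of the ground set is violated."""
    subsets = (frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(sorted(ground), r))
    return all(v_tilde(X) >= 0 for X in subsets)

# invented toy v: supply 1 per terminal in {1, 2}, slack 1 for terminal 3
v = lambda X: len(X & {1, 2}) - (1 if 3 in X else 0)
v_tilde, ground = restrict(v, Q_hat={1}, R_hat={1, 2, 3})
assert ground == frozenset({2, 3})
assert is_feasible(v_tilde, ground)   # the minimum over Q̂ ⊆ X ⊆ R̂ is v({1,3}) = 0
```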

4 Strong Map Sequences

We concluded the previous section with a useful restriction of our submodular functions v_α and v_δ to sets X with Q̂ ⊆ X ⊆ R̂, while maintaining the guarantee that our parametric instance is feasible if and only if the minimizer of our restricted function is not violated. Building on this, we show that both restricted functions satisfy the strong map property (also known as the decreasing differences property).

Definition 15 (Strong Map Property [11]).

Let f_1, f_2 : 2^E → ℝ be two submodular functions defined over the same finite ground set E. We write f_1 → f_2, or f_2 ← f_1, if X ⊆ Y ⊆ E implies

f_1(Y) − f_1(X) ≤ f_2(Y) − f_2(X).

The relation → is called a strong map. Submodular functions f_1, f_2, …, f_k form a strong map sequence if f_1 → f_2 → ⋯ → f_k.

Recall that the minimizers of submodular functions are closed under union and intersection, meaning that there exists a unique minimal and maximal minimizer for every submodular function. A result by Topkis [15] relates the minimal and maximal minimizers of functions that form strong map sequences.

Lemma 16 (Minimizers for Strong Map Sequences [11]).

Let f_1, f_2 : 2^E → ℝ be two submodular functions over the same ground set E with f_1 → f_2. Denote by X_1^min and X_1^max the minimal and maximal minimizer of f_1, respectively, while X_2^min and X_2^max are the minimal and maximal minimizer of f_2, respectively. Then X_2^min ⊆ X_1^min and X_2^max ⊆ X_1^max.

Lemma 16 already gives an intuition for how the strong map property can be used for the parametric search problems MaximizeAlpha and MinimizeDelta: each choice of α or δ gives us a new submodular function, and, if these functions form strong map sequences, the strong map property implies that we can only encounter at most |S| distinct minimal minimizers for a monotonic sequence of parameter values.
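This intuition can be checked on a toy parametric family. The family below, f_θ(X) = Σ_{s∈X}(θ − w_s) for invented weights w, is modular and hence submodular, and θ ≤ θ′ implies f_θ → f_{θ′}; it serves purely to visualize the nested chain of minimal minimizers:

```python
from itertools import combinations

def minimal_minimizer(ground, f):
    """Brute-force minimal minimizer: the intersection of all minimizers."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(sorted(ground), r)]
    best = min(f(X) for X in subsets)
    return frozenset.intersection(*(X for X in subsets if f(X) == best))

w = {0: 3.0, 1: 2.0, 2: 1.0}            # invented weights
ground = frozenset(w)

chain = []
for theta in [0.0, 1.5, 2.5, 3.5]:       # monotonically increasing parameter
    f = lambda X, t=theta: sum(t - w[s] for s in X)
    chain.append(minimal_minimizer(ground, f))

# minimal minimizers shrink along the monotone parameter sequence,
# so at most |ground| + 1 distinct sets can ever appear
for bigger, smaller in zip(chain, chain[1:]):
    assert smaller <= bigger
assert len(set(chain)) <= len(ground) + 1
print([sorted(X) for X in chain])  # prints: [[0, 1, 2], [0, 1], [0], []]
```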

Unfortunately, neither v_α nor v_δ satisfies the strong map property as is: in nontrivial cases, the nested sets ∅, {ŝ}, and {ŝ,sˇ} satisfy o_{α+1}({ŝ}) − o_{α+1}(∅) > o_α({ŝ}) − o_α(∅) but o_{α+1}({ŝ,sˇ}) − o_{α+1}({ŝ}) < o_α({ŝ,sˇ}) − o_α({ŝ}), and the same holds for o_δ, so our proofs of Lemma 17 and Lemma 18 fail for the general functions v_α and v_δ. However, this changes when considering the restricted functions ṽ_α and ṽ_δ defined in Section 3. Recall that these functions are defined over the ground set R̂ ∖ Q̂, and that their definition ensures that ŝ is implicitly added, while sˇ is excluded.

Lemma 17.

Let 0 ≤ α′ ≤ α. Then ṽ_α → ṽ_{α′}.

Proof.

Our argument is structured as follows: We first prove that õ_{α+1} → õ_α holds for all α ≥ 0 and then use this result to prove that the restricted functions also form a strong map sequence with ṽ_{α+1} → ṽ_α. The general claim ṽ_α → ṽ_{α′} then immediately follows by transitivity of →.

Figure 3: The auxiliary network 𝔑 for MaximizeAlpha.

Let α ≥ 0 be arbitrary but fixed. We construct an auxiliary network 𝔑 with nodes V(𝔑) ≔ V(𝒩_α) ∪ {𝔰} and arcs A(𝔑) ≔ A(𝒩_α) ∪ {𝔞 ≔ (𝔰,s)}, where s is the source that was replaced by sˇ in the first phase of the algorithm. For the new arc, we set u_𝔞 = 1 and τ_𝔞 = 0. An example network for a given source ŝ is depicted in Figure 3.

Let 𝔬 be the maximum out-flow function of the network 𝔑, restricted analogously to õ_α to subsets of (R̂ ∖ Q̂) ∪ {𝔰}. Notice that adding 𝔰 to a set of terminals X ⊆ R̂ ∖ Q̂ is equivalent to increasing α by one: since ŝ is implicitly contained in X while sˇ is excluded, flow cannot bypass the arcs (𝔰,s) and (ŝ,s) through (sˇ,s). Formally, it holds for every set X ⊆ R̂ ∖ Q̂ that

  1. 𝔬(X) = õ_α(X), and

  2. 𝔬(X ∪ {𝔰}) = õ_{α+1}(X).

Clearly, the function 𝔬 is also submodular. Hence, given two sets X ⊆ Y ⊆ R̂ ∖ Q̂, the definition of submodularity in Equation (2) can be restated as

𝔬(Y ∪ {𝔰}) − 𝔬(X ∪ {𝔰}) ≤ 𝔬(Y) − 𝔬(X). (4)

Combining Equation (4) with our previous observations, it follows that the sets X and Y satisfy

õ_{α+1}(Y) − õ_{α+1}(X) = 𝔬(Y ∪ {𝔰}) − 𝔬(X ∪ {𝔰}) ≤ 𝔬(Y) − 𝔬(X) = õ_α(Y) − õ_α(X),

where the first equality is Observation 2, the inequality is Equation (4), and the last equality is Observation 1.

Hence, we have shown that õ_{α+1} → õ_α holds for every parameter value α. Finally, recall our definition of ṽ_α as ṽ_α(X) = õ_α(X) − b̃_α(X) for any X ⊆ R̂ ∖ Q̂. Due to the strong map property of the õ_α, we get for all X ⊆ Y ⊆ R̂ ∖ Q̂ that

ṽ_{α+1}(Y) − ṽ_α(Y) = õ_{α+1}(Y) − õ_α(Y) + b̃_α(Y) − b̃_{α+1}(Y)
                    = õ_{α+1}(Y) − õ_α(Y) + Δ_α − Δ_{α+1}
                    ≤ õ_{α+1}(X) − õ_α(X) + Δ_α − Δ_{α+1}
                    = õ_{α+1}(X) − õ_α(X) + b̃_α(X) − b̃_{α+1}(X)
                    = ṽ_{α+1}(X) − ṽ_α(X),

therefore proving that ṽ_{α+1} → ṽ_α. Thus, by transitivity of →, our claim ṽ_α → ṽ_{α′} holds.

Next, we utilize a similar argument to prove that v~δ forms a strong map sequence. Here, we rely on our specific definition of the δ-parametric network 𝒩δ.

Lemma 18.

Let 0 ≤ δ′ ≤ δ. Then ṽ_{δ′} → ṽ_δ.

Proof.

The general approach is identical to the proof of Lemma 17. That is, we show that the õ_δ form a strong map sequence with õ_{δ−1} → õ_δ for δ ≥ 1. Afterwards, we transfer this result to ṽ_{δ−1} and ṽ_δ. The general claim then directly follows from the transitivity of →.

Figure 4: The auxiliary network 𝔑 for MinimizeDelta.

Again, let δ ≥ 1 be arbitrary but fixed. We construct an auxiliary network 𝔑 with nodes V(𝔑) ≔ V(𝒩_δ) ∪ {𝔰} and arcs A(𝔑) ≔ (A(𝒩_δ) ∖ {(ŝ,s)}) ∪ {𝔞 ≔ (𝔰,s), â ≔ (ŝ,𝔰)}, where s is the source that was replaced by sˇ in the first step of the algorithm. Additionally, let u_â = u_𝔞 = 1, τ_𝔞 = δ − 1, and τ_â = 1. An example for a source ŝ is given in Figure 4.

Again, 𝔬 denotes the maximum out-flow function for this network, restricted to subsets of (R̂ ∖ Q̂) ∪ {𝔰}. In this construction, we have replaced the arc (ŝ,s) by a path consisting of the arcs (ŝ,𝔰) and (𝔰,s) with a combined transit time of δ. As a consequence, the maximum out-flow 𝔬(X) remains unchanged for sets X ⊆ R̂ ∖ Q̂. If 𝔰 is added to X, on the other hand, both sources ŝ and 𝔰 compete for access to the unit-capacity arc (𝔰,s). The flow out of X ∪ {𝔰} is then maximized by sending all flow from 𝔰, which is equivalent to decreasing δ by one. Formally, it holds for every set X ⊆ R̂ ∖ Q̂ that

  1. 𝔬(X) = õ_δ(X), and

  2. 𝔬(X ∪ {𝔰}) = õ_{δ−1}(X).

These observations, together with the second definition of submodularity from Equation (2), imply for all sets X ⊆ Y ⊆ R̂ ∖ Q̂ that

õ_{δ−1}(Y) − õ_{δ−1}(X) = 𝔬(Y ∪ {𝔰}) − 𝔬(X ∪ {𝔰}) ≤ 𝔬(Y) − 𝔬(X) = õ_δ(Y) − õ_δ(X).

Therefore, the functions õ_δ form a strong map sequence with õ_{δ−1} → õ_δ. Finally, by the same arguments as in the proof of Lemma 17, the parametric functions ṽ_δ form a strong map sequence with ṽ_{δ′} → ṽ_δ as claimed.

In the following section, we use the strong map property and the resulting nested sequences of minimizers to construct new parametric search algorithms for v_α and v_δ.

5 An Improved Parametric Search

In this section, we adapt Schlöter’s [11] parametric search algorithm for the single-source or single-sink quickest transshipment problem to MaximizeAlpha and MinimizeDelta. We start by showing that ṽ_α and ṽ_δ are monotonic in their respective parameters.

Lemma 19.

Given a set of terminals X ⊆ R̂ ∖ Q̂, both maps α ↦ ṽ_α(X) and δ ↦ ṽ_δ(X) are monotonic in the parameters α and δ, respectively. That is, we have ṽ_α(X) ≥ ṽ_{α+1}(X) and ṽ_δ(X) ≤ ṽ_{δ+1}(X) for all α, δ ≥ 0.

Proof.

Let α, δ ≥ 0 be arbitrary but fixed parameter values. We set Z ≔ X ∪ Q̂ and decompose ṽ_α(X) (and, analogously, ṽ_δ(X)) as

ṽ_α(X) = õ_α(X) − b̃_α(X) (5)
       = õ_α(X) − (b(Z ∖ {ŝ}) + Δ_α)
       = õ_α(X) − (b(Z ∖ {ŝ}) + o_α(Q̂) − o_α(Q))
       = õ_α(X) − (b(Z ∖ {ŝ}) + õ_α(∅) − o_α(Q))
       = õ_α(X) − õ_α(∅) + (o_α(Q) − b(Z ∖ {ŝ})).

Observe that we can treat the term o_α(Q) − b(Z ∖ {ŝ}) as constant: since ŝ, sˇ ∉ Q, the out-flow o_α(Q) is identical for all α ≥ 0 (or δ ≥ 0, respectively). Similarly, the value of b(Z ∖ {ŝ}) does not depend on α or δ.

We now prove monotonicity using this simplification. For MaximizeAlpha, the monotonicity ṽ_α(X) ≥ ṽ_{α+1}(X) holds if and only if õ_α(X) − õ_α(∅) ≥ õ_{α+1}(X) − õ_{α+1}(∅) is true, which follows directly from the strong map property õ_{α+1} → õ_α shown in the proof of Lemma 17, applied to ∅ ⊆ X. Similarly, replacing α with δ in Equation (5), it follows from õ_δ → õ_{δ+1} that õ_δ(X) − õ_δ(∅) ≤ õ_{δ+1}(X) − õ_{δ+1}(∅) and thus ṽ_δ(X) ≤ ṽ_{δ+1}(X) holds.

The monotonicity and strong map property of ṽ^α and ṽ^δ allow us to introduce new parametric search algorithms for MaximizeAlpha (cf. Algorithm 1) and MinimizeDelta (cf. Algorithm 2). At their core, they follow a rather simple approach similar to the algorithms by Schlöter, Skutella, and Tran [12, 11]: we start with an infeasible parameter value α_1 or δ_1 and a corresponding minimizer X_1 of ṽ^{α_1} or ṽ^{δ_1}. Possible initial values are α_1 ≔ α_max = nU_max and δ_1 ≔ 0 [4]. Next, we alternate between the two steps jump and check until the current minimizer X_i is no longer violated. In the jump step, we compute the largest parameter value α_{i+1} or the smallest parameter value δ_{i+1} such that the previous minimizer X_i is no longer violated. Note that this step is, in essence, an integral parametric minimum cost flow problem with a single parametrized arc. Afterwards, the check step finds a minimizer X_{i+1} for the new value α_{i+1} or δ_{i+1}.

Algorithm 1 Parametric Search for MaximizeAlpha.
Algorithm 2 Parametric Search for MinimizeDelta.

In the check step, we use the results of the previous sections to restrict the search for minimizers to X_{i+1} ⊆ X_i. We show in the following that this restriction is indeed valid. Note that the restriction not only reduces the search space in each iteration to |X_i| terminals, but also limits MaximizeAlpha and MinimizeDelta to at most |S| iterations, since the length of the chain X_1 ⊋ X_2 ⊋ ⋯ is bounded by |R̂∖Q̂| ≤ |S|.
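To make the jump/check skeleton concrete, here is a minimal Python sketch of Algorithm 1's control flow on a toy instance. It is an illustration only: the oracle v, the brute-force minimization over subsets, and the linear toy function are hypothetical stand-ins for ṽ^α, the SFM routine, and the parametric minimum cost flow computation.

```python
from itertools import chain, combinations

def subsets(ground):
    g = list(ground)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(g, r) for r in range(len(g) + 1))]

def maximize_alpha(v, ground, alpha_max):
    """Find the largest integer alpha in [0, alpha_max] with
    min_X v(alpha, X) >= 0, assuming v(alpha, X) is non-increasing
    in alpha for every fixed X and minimizers are nested."""
    alpha = alpha_max
    X = min(subsets(ground), key=lambda Y: v(alpha, Y))  # initial minimizer
    while v(alpha, X) < 0:  # X is still violated
        # jump: binary search for the largest alpha that is feasible for X
        # (stands in for the parametric min-cost flow computation)
        lo, hi = 0, alpha
        while lo < hi:
            mid = (lo + hi + 1) // 2
            lo, hi = (mid, hi) if v(mid, X) >= 0 else (lo, mid - 1)
        alpha = lo
        # check: restrict the search for a new minimizer to subsets of X
        # (stands in for SFM on the restricted domain)
        X = min(subsets(X), key=lambda Y: v(alpha, Y))
    return alpha

# Toy instance: v(alpha, X) = cap(X) - alpha * |X| is non-increasing in
# alpha; it is feasible for all X exactly when alpha <= min(cap) = 3.
cap = {0: 5, 1: 3, 2: 8}
v = lambda a, X: sum(cap[x] for x in X) - a * len(X)
print(maximize_alpha(v, cap.keys(), 10))  # -> 3
```

The check step scans only subsets of the previous minimizer, mirroring the restriction X_{i+1} ⊆ X_i, and the loop terminates because the minimizers form a strictly decreasing chain.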

Theorem 20.

Algorithm 1 computes the maximum feasible α ≥ 0 for MaximizeAlpha; Algorithm 2 computes the minimum feasible δ ≥ 0 for MinimizeDelta. Both algorithms terminate after at most |S| iterations of the while loop.

Proof.

We first show by induction that if α_i is infeasible, then there exists a minimizer X_i of ṽ^{α_i} with X_i ⊆ X_{i−1}, where X_0 ≔ R̂∖Q̂.

By definition, X_1 is a minimizer of ṽ^{α_1}. Let α_i and α_{i+1} be infeasible, and assume that X_i is a minimizer of ṽ^{α_i}. We have α_{i+1} < α_i, since otherwise ṽ^{α_{i+1}}(X_i) ≤ ṽ^{α_i}(X_i) < 0 by Lemma 19, contradicting the choice of α_{i+1} as feasible for X_i. Together with Lemma 17, this implies that ṽ^{α_i} and ṽ^{α_{i+1}} are in strong map relation. Therefore, we have X_{i+1}^min ⊆ X_i^min ⊆ X_i due to Lemma 16, with X_j^min being the minimal minimizer of ṽ^{α_j} for j ∈ {i, i+1}. Since α_{i+1} is infeasible with ṽ^{α_{i+1}}(X_{i+1}^min) < 0 and, by definition of the jump step, ṽ^{α_{i+1}}(X_i) ≥ 0, we even have X_{i+1}^min ⊊ X_i. Hence, X_{i+1}^min is a feasible choice for X_{i+1} in Algorithm 1.

Recall that Algorithm 1 terminates after at most |S| iterations. Let α_i be the value returned by the algorithm, and let X_i be the minimizer computed in the final iteration i. If α_i were not feasible, then X_i would be a minimizer with ṽ^{α_i}(X_i) < 0, and the while loop would not have terminated. Hence, α_i is feasible.

To see that α_i is maximum, recall that α_1 = α_max is infeasible, and thus the jump step is executed at least once. By construction of the jump step, α_i is the largest value for which the previous minimizer X_{i−1} is no longer violated. Hence, for any α > α_i, we have ṽ^α(X_{i−1}) < 0, so α is infeasible. Therefore, α_i is the optimal solution to MaximizeAlpha.

The proof for Algorithm 2 is analogous.
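The analogous control flow for Algorithm 2 only reverses the direction of the jump's binary search. The following sketch is again a toy illustration: the oracle v, the brute-force minimization, and the linear toy function are hypothetical stand-ins for ṽ^δ and the actual SFM and min-cost flow subroutines.

```python
from itertools import chain, combinations

def subsets(ground):
    g = list(ground)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(g, r) for r in range(len(g) + 1))]

def minimize_delta(v, ground, delta_max):
    """Find the smallest integer delta in [0, delta_max] with
    min_X v(delta, X) >= 0, assuming v(delta, X) is non-decreasing
    in delta for every fixed X and minimizers are nested."""
    delta = 0
    X = min(subsets(ground), key=lambda Y: v(delta, Y))
    while v(delta, X) < 0:
        # jump: binary search for the smallest delta that is feasible for X
        lo, hi = delta, delta_max
        while lo < hi:
            mid = (lo + hi) // 2
            lo, hi = (lo, mid) if v(mid, X) >= 0 else (mid + 1, hi)
        delta = lo
        # check: new minimizer among subsets of the previous one
        X = min(subsets(X), key=lambda Y: v(delta, Y))
    return delta

# Toy instance: v(delta, X) = delta * |X| - demand(X) is non-decreasing in
# delta; it is feasible for all X exactly when delta >= max(demand) = 5.
demand = {0: 2, 1: 5}
v = lambda d, X: d * len(X) - sum(demand[x] for x in X)
print(minimize_delta(v, demand.keys(), 10))  # -> 5
```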

The remainder of the section is devoted to the runtime analysis. Our main improvement is due to the upper bound k=|S| on the number of iterations the algorithms execute.

Theorem 21.

Algorithm 1 can be implemented to solve MaximizeAlpha in strongly polynomial time 𝒪(k·[SFM(k,n,m) + MCF(n,m)²]) and in weakly polynomial time 𝒪(k·[SFM(k,n,m) + log(nU_max)·MCF(n,m)]).

Proof.

The runtime is determined by the two main steps performed in each iteration:

  • Jump can be implemented using Megiddo's parametric search [8] or a binary search over the range [0, nU_max], in both cases in conjunction with a minimum cost flow algorithm. The former results in a runtime of 𝒪(MCF(n,m)²), the latter in 𝒪(log(nU_max)·MCF(n,m)).

  • Check minimizes the submodular function ṽ^α on the restricted domain. In the worst case, this takes 𝒪(SFM(k,n,m)) time.

Together with the upper bound of k on the number of iterations of the while loop, the total runtime is 𝒪(k·[SFM(k,n,m) + MCF(n,m)²]) in the strongly polynomial case and 𝒪(k·[SFM(k,n,m) + log(nU_max)·MCF(n,m)]) in the weakly polynomial case.

We obtain an analogous runtime for Algorithm 2.

Theorem 22.

Algorithm 2 can be implemented to solve MinimizeDelta in strongly polynomial time 𝒪(k·[SFM(k,n,m) + MCF(n,m)²]) and in weakly polynomial time 𝒪(k·[SFM(k,n,m) + log(T)·MCF(n,m)]).

Proof.

The proof is analogous to that of Theorem 21. The binary search for the minimum feasible value of δ is done on the range [0,T], which yields the different logarithmic term.

We conclude this section with an improved runtime for computing integral dynamic transshipments compared to the state-of-the-art runtime of 𝒪~(m⁴k¹⁵).

Theorem 23.

Given a dynamic transshipment instance (𝒩, b, T), an integral quickest transshipment can be computed in 𝒪~(m²k⁵ + m⁴k²) time.

Proof.

Hoppe and Tardos [4] already proved that their algorithm for the dynamic transshipment problem terminates after 𝒪(k) iterations, each of which consists of one call each of Algorithm 1 and Algorithm 2. Given the strongly polynomial runtime of 𝒪(k·(SFM(k,n,m) + MCF(n,m)²)) for both subroutines, we obtain a worst-case complexity of 𝒪(k²·(SFM(k,n,m) + MCF(n,m)²)).

Suppressing polylogarithmic terms, we have a time complexity of 𝒪~(m²k³) for SFM(k,n,m) and of 𝒪~(m²) for MCF(n,m). Overall, we obtain an improved runtime of 𝒪~(m²k⁵ + m⁴k²) for the integral dynamic transshipment problem.

In order to compute a quickest transshipment, we first have to determine the minimum feasible time horizon. This can be done with the method of Schlöter, Skutella, and Tran [12] in 𝒪~(m²k⁵ + m³k³ + m³n) time. This runtime is dominated by that of our algorithm for computing the corresponding transshipment: since we assume that the network is connected, we have k ≤ n ≤ m, and thus 𝒪~(m³n) ⊆ 𝒪~(m⁴) and 𝒪~(m³k³) ⊆ 𝒪~(m⁴k²). Overall, the time required to compute an integral quickest transshipment is

𝒪~((m²k⁵ + m³k³ + m³n) + (m²k⁵ + m⁴k²)) = 𝒪~(m²k⁵ + m⁴k² + m⁴) = 𝒪~(m²k⁵ + m⁴k²),

which shows Theorem 23.
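The exponent arithmetic in this proof can also be double-checked mechanically. The following throwaway script (not part of any algorithm) substitutes the suppressed bounds SFM = m²k³ and MCF = m² and confirms the simplifications for all small values with k ≤ n ≤ m:

```python
# Sanity check of the exponent arithmetic, with polylogarithmic factors
# dropped: k^2 * (SFM + MCF^2) with SFM = m^2 * k^3 and MCF = m^2 expands
# to m^2 k^5 + m^4 k^2, and the time-horizon terms m^3 k^3 and m^3 n are
# dominated by m^4 k^2 and m^4 whenever k <= n <= m.
for k in range(1, 8):
    for n in range(k, 12):
        for m in range(n, 16):
            total = k**2 * (m**2 * k**3 + (m**2) ** 2)
            assert total == m**2 * k**5 + m**4 * k**2
            assert m**3 * k**3 <= m**4 * k**2  # since k <= m
            assert m**3 * n <= m**4            # since n <= m
print("exponent arithmetic checks out")
```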

6 Conclusion and Outlook

In this paper, we propose an improved version of the algorithm by Hoppe and Tardos for the integral quickest transshipment problem. Our approach is based on more efficient parametric search algorithms exploiting the strong map property and yields a substantial reduction of the runtime from 𝒪~(m⁴k¹⁵) to 𝒪~(m²k⁵ + m⁴k²).

Our findings open up several directions for future research. In particular, the restrictions of submodular functions to suitable domains introduced in this paper may yield even better runtime bounds for our algorithms. Furthermore, we see potential for improving the jump steps in Algorithms 1 and 2, which are currently based on Megiddo's parametric search and contribute a factor of 𝒪~(m⁴) to the runtime. This factor constitutes the remaining gap in runtime between the integral and the fractional quickest transshipment problem. In order to close this gap, future studies may focus on adapting parametric minimum cost flow algorithms akin to those of Lin and Jaillet [7] and Saho and Shigeno [10].

References

  • [1] Mariia Anapolska, Dario van den Boom, Christina Büsing, and Timo Gersing. A faster parametric search for the integral quickest transshipment problem. arXiv preprint arXiv:2505.12975, 2025. doi:10.48550/arXiv.2505.12975.
  • [2] Rainer E. Burkard, Karin Dlaska, and Bettina Klinz. The quickest flow problem. Zeitschrift für Operations Research, 37(1):31–58, 1993. doi:10.1007/BF01415527.
  • [3] Lester R. Ford and Delbert R. Fulkerson. Flows in Networks. Princeton University Press, 1962.
  • [4] Bruce Hoppe and Éva Tardos. The quickest transshipment problem. Mathematics of Operations Research, 25(1):36–62, 2000. doi:10.1287/moor.25.1.36.15211.
  • [5] Satoru Iwata and James B. Orlin. A simple combinatorial algorithm for submodular function minimization. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1230–1237. SIAM, 2009. doi:10.1137/1.9781611973068.133.
  • [6] Yin Tat Lee, Aaron Sidford, and Sam Chiu-Wai Wong. A faster cutting plane method and its implications for combinatorial and convex optimization. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), pages 1049–1065. IEEE, 2015. doi:10.1109/FOCS.2015.68.
  • [7] Maokai Lin and Patrick Jaillet. On the quickest flow problem in dynamic networks – a parametric min-cost flow approach. In Piotr Indyk, editor, Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4-6, 2015, pages 1343–1356. SIAM, 2015. doi:10.1137/1.9781611973730.89.
  • [8] Nimrod Megiddo. Combinatorial optimization with rational objective functions. In Proceedings of the Tenth Annual ACM Symposium on Theory of Computing (STOC), pages 1–12, 1978. doi:10.1145/800133.804326.
  • [9] James B. Orlin. A faster strongly polynomial minimum cost flow algorithm. In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing (STOC), pages 377–387, 1988. doi:10.1145/62212.62249.
  • [10] Masahide Saho and Maiko Shigeno. Cancel-and-tighten algorithm for quickest flow problems. Networks, 69(2):179–188, 2017. doi:10.1002/net.21726.
  • [11] Miriam Schlöter. Flows over time and submodular function minimization. PhD thesis, Technische Universität Berlin, 2018.
  • [12] Miriam Schlöter, Martin Skutella, and Khai Van Tran. A faster algorithm for quickest transshipments via an extended discrete Newton method. In Proceedings of the 2022 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 90–102. SIAM, 2022. doi:10.1137/1.9781611977073.5.
  • [13] Martin Skutella. An introduction to network flows over time. Research Trends in Combinatorial Optimization: Bonn 2008, pages 451–482, 2009. doi:10.1007/978-3-540-76796-1_21.
  • [14] Martin Skutella. An introduction to transshipments over time. arXiv preprint arXiv:2312.04991, 2023. doi:10.48550/arXiv.2312.04991.
  • [15] Donald M. Topkis. Minimizing a Submodular Function on a Lattice. Operations Research, 26(2):305–321, 1978. doi:10.1287/opre.26.2.305.