
Incremental Maximization
for a Broad Class of Objectives

Yann Disser, TU Darmstadt, Germany · David Weckbecker, TU Darmstadt, Germany
Abstract

We consider incremental maximization problems, where the solution has to be built up gradually by adding elements one after the other. In every step, the incremental solution must be competitive, compared against the optimum solution of the current cardinality. We prove that a competitive solution always exists when the objective function is monotone and β-accountable, by providing a scaling algorithm that guarantees a constant competitive ratio. This generalizes known results and, importantly, yields the first competitive algorithm for the natural class of monotone and subadditive objective functions.

Keywords and phrases:
incremental maximization, competitive analysis, subadditive functions
Copyright and License:
© Yann Disser and David Weckbecker; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Online algorithms; Mathematics of computing → Combinatorial algorithms; Mathematics of computing → Combinatorial optimization
Funding:
Supported by DFG grant DI 2041/2.
Editors:
Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman

1 Introduction

Large infrastructure projects are typically implemented sequentially over longer periods of time. In such settings, it makes sense to prioritize completion of parts of the project that already yield some benefit on their own. This raises the general question of how to compute incremental solutions with a good partial benefit throughout the course of the project. We model such settings mathematically by using the abstract framework of the incremental maximization (IncMax) problem.

In the IncMax problem, we are given a countable set of elements $U$, along with an objective function $f \colon 2^U \to \mathbb{R}_{\ge 0}$. We generally require $f$ to be monotone, i.e., that $f(A) \le f(B)$ for all $A \subseteq B \subseteq U$. An incremental solution $\pi$ is a permutation $(e_1, e_2, \dots)$ of the elements in $U$ that specifies the order in which the elements are to be included in the solution. The solution after step $C \in \mathbb{N}$ is then given by $\pi(C) := \{e_1, \dots, e_C\}$, and we denote the optimum solution value of cardinality $C$ by

$\mathrm{Opt}(C) := \sup\{ f(A) \mid A \subseteq U, |A| = C \}.$

We say that the incremental solution $\pi$ is $\rho$-competitive if $\mathrm{Opt}(C) \le \rho \cdot f(\pi(C))$ for all $C \in \mathbb{N}$. The competitive ratio of $\pi$ is $\rho(\pi) := \inf\{ \rho \ge 1 \mid \pi \text{ is } \rho\text{-competitive} \}$, the competitive ratio of a problem instance given by $U$ and $f$ is $\rho(U, f) := \inf_{\pi} \rho(\pi)$, and the competitive ratio of a set of instances $\mathcal{C}$ is $\rho(\mathcal{C}) := \sup\{ \rho(U, f) \mid (U, f) \in \mathcal{C} \}$.
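For intuition, the quantities above can be evaluated by brute force on tiny instances. The following sketch (our illustration, not part of the paper; the toy weights are hypothetical) computes $\mathrm{Opt}(C)$ by enumeration and the competitive ratio of a fixed ordering:

```python
from itertools import combinations

def opt(f, universe, C):
    """Brute-force Opt(C): best value over all subsets of cardinality C."""
    return max(f(set(A)) for A in combinations(universe, C))

def competitive_ratio(f, universe, order):
    """max over C of Opt(C) / f(pi(C)) for the incremental solution 'order'."""
    return max(
        opt(f, universe, C) / f(set(order[:C]))
        for C in range(1, len(order) + 1)
    )

# Toy modular objective: f(S) = total weight of S.
weights = {'a': 3.0, 'b': 2.0, 'c': 1.0}
f = lambda S: sum(weights[e] for e in S)

ratio_good = competitive_ratio(f, weights, ['a', 'b', 'c'])  # heaviest first
ratio_bad = competitive_ratio(f, weights, ['c', 'b', 'a'])   # lightest first
```

For this modular toy objective, adding elements by decreasing weight is 1-competitive, while the reverse order is only 3-competitive.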

Clearly, we cannot hope for good incremental solutions in full generality: For example, if f(E) is given by the value of a maximum flow using an edge set E in some underlying graph that consists of a short path of low capacity and a long path of high capacity, then we have to decide whether to invest in the short path to obtain some value early, or in the long path to achieve a high value as quickly as possible, but cannot ensure both simultaneously. More generally, we must demand that the objective function has some form of a diminishing-returns property that requires the value of solutions to be explainable by the value of their parts, i.e., value cannot suddenly emerge upon completion of certain structures (such as paths in the flow example).

Diminishing returns are typically captured by submodular objective functions, i.e., functions $f \colon 2^U \to \mathbb{R}_{\ge 0}$ with $f(A) + f(B) \ge f(A \cup B) + f(A \cap B)$ for all $A, B \subseteq U$. In a classical result, Nemhauser et al. [25] showed that the obvious greedy algorithm, which in each step adds to the current solution $S$ the element $e$ of largest marginal value $f(S \cup \{e\}) - f(S)$, produces a $\frac{e}{e-1}$-competitive solution for submodular objectives. The most natural relaxation of submodularity that still intuitively captures diminishing returns is subadditivity, i.e., $f(A) + f(B) \ge f(A \cup B)$ for all $A, B \subseteq U$. While subadditivity is a classical and natural property that intuitively seems promising for incremental maximization, no bounds on the competitive ratio were known to date.

Unable to bound the competitive ratio for subadditive objectives, Bernstein et al. [1] considered accountable functions $f$ instead (see Figure 1 along with the following). A function $f \colon 2^U \to \mathbb{R}_{\ge 0}$ is called accountable if, for every finite and non-empty set $S \subseteq U$, there exists $e \in S$ with $f(S \setminus \{e\}) \ge \bigl(1 - \frac{1}{|S|}\bigr) f(S)$. Intuitively, this property directly requires that the value of a set cannot suddenly emerge when adding its last element. Bernstein et al. [1] gave a $(\varphi + 1)$-competitive scaling algorithm for accountable objectives, where $\varphi := (\sqrt{5} + 1)/2$ denotes the golden ratio.

Note that submodular functions are accountable, and many combinatorial maximization problems, such as weighted (multidimensional) matching, set packing, coverage, knapsack, and many more, give rise to accountable objective functions (see [1]). On the other hand, not all subadditive functions are accountable. For a simple example, assume that the elements of $U$ represent identical unit squares, and $f(S)$ is the side length of the largest square that can be tiled with the elements in $S \subseteq U$. Then, $f(S) = \lfloor \sqrt{|S|} \rfloor$ grows strictly sublinearly in $|S|$ and is thus subadditive, but it is not accountable, as can be seen by considering $|S| = 4$, which yields $f(S \setminus \{e\}) = 1 < \frac{3}{2} = \bigl(1 - \frac{1}{4}\bigr) f(S)$ for all $e \in S$. More generally, functions of the form $f(A) = g(|A|)$ with sublinear $g \in o(|A|)$ can be subadditive but not accountable.
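The failure of accountability in this example can be checked mechanically. This snippet (our illustration) uses $f(S) = \lfloor \sqrt{|S|} \rfloor$ and verifies that a 4-element set violates the accountability condition:

```python
import math

def f(S):
    """Side length of the largest square tileable by |S| unit squares."""
    return math.isqrt(len(S))

def accountable_at(S):
    """True iff some e in S satisfies f(S \\ {e}) >= (1 - 1/|S|) * f(S)."""
    return any(f(S - {e}) >= (1 - 1 / len(S)) * f(S) for e in S)

S = {1, 2, 3, 4}
violated = not accountable_at(S)   # f(S) = 2, but f(S - {e}) = 1 < 3/2
```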

Disser and Weckbecker [11] considered $\gamma$-$\alpha$-augmentability as an alternative relaxation of submodularity that additionally maintains competitiveness of the greedy algorithm. A function is called $\gamma$-$\alpha$-augmentable if, for all finite $A, B \subseteq U$ with $B \ne \emptyset$, there exists $b \in B$ with $f(A \cup \{b\}) - f(A) \ge \frac{\gamma f(A \cup B) - \alpha f(A)}{|B|}$. In this setting, the greedy algorithm has competitive ratio exactly $\frac{\alpha}{\gamma} \cdot \frac{e^{\alpha}}{e^{\alpha} - 1}$.

In this paper, we provide a scaling algorithm that is competitive for an abstract class of functions that relaxes all above classes and includes, for the first time, all subadditive functions.

Figure 1: Relation of different function properties, including fractional subadditivity (FSA), weighted rank functions (WRF), and functions of bounded submodularity ratio (BSR). For the depicted inclusions to hold simultaneously, we can set $\beta = \min\{\frac{1}{2}, \frac{\gamma}{\alpha}\}$.

Our results

We introduce a class of functions based on an equivalent characterization of accountability given by Disser et al. [10]. They showed that a function $f \colon 2^U \to \mathbb{R}_{\ge 0}$ is accountable if and only if, for every finite $S \subseteq U$, there exists an ordering $(e_1, \dots, e_{|S|})$ of $S$ with $f(\{e_1, \dots, e_i\}) \ge \frac{i}{|S|} f(S)$ for all $i \in \{1, \dots, |S|\}$. We relax this characterization as follows.

Definition 1.

For $\beta \in (0, 1]$, a function $f \colon 2^U \to \mathbb{R}_{\ge 0}$ is called $\beta$-accountable if, for every finite $S \subseteq U$, there exists an ordering $(e_1, \dots, e_{|S|})$ of the elements in $S$ such that, for all $i \in \{1, \dots, |S|\}$,

$f(\{e_1, \dots, e_i\}) \ge \beta \frac{i}{|S|} f(S).$

This definition generalizes multiple other function properties (see Figure 1). In particular, accountable functions [10] correspond exactly to $1$-accountable functions (by definition and [10]), every (monotone) $\gamma$-$\alpha$-augmentable function [11] is $\frac{\gamma}{\alpha}$-accountable (Proposition 12), and every (monotone) subadditive function is $\frac{1}{2}$-accountable (Proposition 8).

We establish a general upper bound on the competitive ratio of incremental maximization with a $\beta$-accountable objective via a scaling algorithm parameterized by $\beta$, and complement this result with a lower bound that is tight in the limit $\beta \to 0$ (see Figure 2).

Theorem 2.

IncMax with a β-accountable objective has competitive ratio

$\rho \in \left[ \frac{1}{\beta} \left( 1 + \frac{1}{\lceil 1/\beta \rceil + 1} \right),\ \frac{1}{2\beta} + 1 + \sqrt{\frac{1}{4\beta^2} + 1} \right].$

Note that, for $\beta = 1$, the upper bound recovers the bound of $\varphi + 1 \approx 2.618$ of [1].

Importantly, Theorem 2 yields the first constant upper bound for the natural class of subadditive objective functions ($\beta = \frac{1}{2}$).

Theorem 3.

IncMax with a subadditive objective has competitive ratio at most $2 + \sqrt{2}$.
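As a quick sanity check (ours, for illustration), the upper-bound expression $\delta(\beta) = \frac{1}{2\beta} + 1 + \sqrt{\frac{1}{4\beta^2} + 1}$ of Theorem 2 can be evaluated at the two special values discussed here:

```python
import math

def delta(beta):
    """Upper bound of Theorem 2 (also the scaling parameter of Scaling_beta)."""
    return 1 / (2 * beta) + 1 + math.sqrt(1 / (4 * beta ** 2) + 1)

phi = (math.sqrt(5) + 1) / 2
b1 = delta(1.0)       # accountable case: equals phi + 1 ~ 2.618
b_half = delta(0.5)   # subadditive case: equals 2 + sqrt(2) ~ 3.414
```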

Furthermore, Theorem 2 also yields the following bound, which improves on the best known upper bound for $\gamma$-$\alpha$-augmentable functions of [11] whenever $\gamma < \frac{\alpha e^{\alpha}}{e^{2\alpha} - 1}$.

Theorem 4.

IncMax with a γ-α-augmentable objective has competitive ratio at most

$\frac{\alpha}{2\gamma} + 1 + \sqrt{\frac{\alpha^2}{4\gamma^2} + 1}.$
Figure 2: Plot of the bounds of Theorem 2, and their difference (red).

Related work

The general framework of incremental maximization was first proposed by Bernstein et al. [1]. They assumed that the objective is accountable and showed that, in this case, the competitive ratio of the problem is at most $\varphi + 1 \approx 2.618$. Utilizing a continuization technique, Disser et al. [10] showed a lower bound of $2.246$ in this setting and provided evidence that the upper bound of $\varphi + 1$ is tight. Before the general framework was introduced, Zhu et al. [30] considered a special case of the problem where edges have to be added over time in order to maximize the number of internal nodes of a spanning tree. They proved an upper bound of $12/7$.

The performance of a simple greedy algorithm for this problem was investigated separately. Rado [27] showed that the greedy algorithm performs optimally if the objective function is modular. For submodular objectives, Nemhauser et al. [25] showed that the greedy algorithm has a competitive ratio of at most $\frac{e}{e-1}$. This is known to be tight due to Feige [12]. More recently, other classes of objectives were considered. Das and Kempe [6] introduced the submodularity ratio as a relaxation of submodularity and showed that the greedy algorithm has a competitive ratio of at most $\frac{e^{\gamma}}{e^{\gamma} - 1}$ for objectives with submodularity ratio $\gamma \in [0, 1]$. This was later shown to be tight by Bian et al. [2], who also parametrized the bound by the curvature. A different relaxation of submodularity is $\alpha$-augmentability. Here, the competitive ratio of the greedy algorithm is known to be at most $\frac{\alpha e^{\alpha}}{e^{\alpha} - 1}$ due to Bernstein et al. [1]. Disser and Weckbecker [11] showed that this upper bound is tight and generalized this result by showing that the greedy algorithm has a competitive ratio of exactly $\frac{\alpha}{\gamma} \cdot \frac{e^{\alpha}}{e^{\alpha} - 1}$ if the objective is $\gamma$-$\alpha$-augmentable.

A generalization of the incremental maximization problem replaces the increasing cardinality constraint with an increasing knapsack constraint. Under the assumption that the objective is modular, Megow and Mestre [22] provide an instance-sensitive near-optimal solution. Disser et al. [9] additionally assume that items that do not fit into the knapsack can be discarded and show that the competitive ratio in this setting is exactly $2$. This result was later generalized to submodular objectives with bounded curvature by Klimm and Knaack [20]. Kawase et al. [19] gave an upper bound of $\frac{2e}{e-1}$ on the randomized competitive ratio of the incremental maximization problem under a knapsack constraint with a submodular objective, again assuming that items that do not fit can be discarded. Instead of discarding items that do not fit, Navarra and Pinotti [24] assumed that every item fits into the knapsack and provided a $2$-competitive algorithm for modular objectives. Disser et al. [8] assumed that the ratio between the largest and smallest value of a single element is bounded by some constant $M$ and showed that the competitive ratio of the incremental maximization problem under a knapsack constraint with a fractionally subadditive objective lies between $\max\{2.618, \sqrt{M}\}$ and $\max\{3.293\sqrt{M}, 2M\}$.

Disser et al. [7] considered an incremental budget constraint for the prize-collecting Steiner tree problem, where the objective does not obey a diminishing-returns property and the authors had to resort to bicriterial approximation guarantees. Other variants of the incremental maximization problem include a variation where the goal is to maximize a sum-objective [18, 14, 29], or incremental solutions in a changing environment [16, 17, 28]. Incremental minimization has been considered, among others, for the $k$-center problem [15], the minimum latency problem [3, 13, 4], the $k$-median problem [23, 5, 21], and the facility location problem [26, 21].

2 Competitive Bounds

The best known algorithm to solve the IncMax problem with an accountable objective was presented in [1]. It follows the idea of calculating cardinalities $c_0, c_1, \dots$ and adding the optimum solution sets for these cardinalities one after the other, where the order of the elements within each set follows Definition 1. The cardinalities $c_0, c_1, \dots$ are calculated by setting $c_0 = 1$ and iteratively scaling via $c_{i+1} = \lceil \delta c_i \rceil$ for some $\delta > 1$.

Since β-accountability is closely related to accountability, we introduce a modified version of this algorithm to find an incremental solution for the IncMax problem with a β-accountable objective.

The algorithm Scalingβ uses the scaling parameter

$\delta = \frac{1}{2\beta} + 1 + \sqrt{\frac{1}{4\beta^2} + 1}$

and chooses

$c_1 \in \operatorname{arg\,max}_{C \in \mathbb{N}} \frac{\mathrm{Opt}(C)}{C}$

in an arbitrary, but fixed way. Then, for $i \in \mathbb{N}$, it chooses

$c_{i+1} \in \operatorname{arg\,max}_{C \in \mathbb{N},\, C \ge \delta c_i} \frac{\mathrm{Opt}(C)}{C},$

also in an arbitrary, but fixed way. Scalingβ operates in phases, and in phase $i$, it adds the optimum solution of cardinality $c_i$ in the order given by Definition 1.
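The phase structure of Scalingβ can be sketched as follows. This is our illustrative sketch, not the paper's implementation: `opt_set` and `accountable_order` are hypothetical oracles for optimum solutions and Definition 1 orderings, and the argmax over all of $\mathbb{N}$ is capped at a finite `horizon`:

```python
import math

def scaling_beta(f, beta, opt_set, accountable_order, horizon):
    """Sketch of the phase structure of Scaling_beta.

    f: objective on sets; opt_set(C): oracle returning an optimum solution
    of cardinality C; accountable_order(S): oracle returning an ordering of
    S as in Definition 1; horizon: finite cap replacing the argmax over N.
    """
    delta = 1 / (2 * beta) + 1 + math.sqrt(1 / (4 * beta ** 2) + 1)
    order, added = [], set()
    lower = 1.0
    while True:
        candidates = range(math.ceil(lower), horizon + 1)
        if not candidates:
            break
        # c_i maximizes the density Opt(C)/C among C >= delta * c_{i-1}.
        c = max(candidates, key=lambda C: (f(opt_set(C)) / C, -C))
        for e in accountable_order(opt_set(c)):
            if e not in added:          # skip elements added in earlier phases
                added.add(e)
                order.append(e)
        lower = delta * c
    return order

# Usage on a toy modular objective f(S) = sum of element weights.
weights = {'a': 4.0, 'b': 3.0, 'c': 2.0, 'd': 1.0}
f = lambda S: sum(weights[e] for e in S)
by_weight = sorted(weights, key=weights.get, reverse=True)
opt_set = lambda C: set(by_weight[:C])
accountable_order = lambda S: sorted(S, key=weights.get, reverse=True)

order = scaling_beta(f, 1.0, opt_set, accountable_order, horizon=4)
```

Note that the paper's algorithm runs on a countably infinite ground set and never terminates; the finite horizon only serves to make the sketch executable.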

We state the following observation.

Observation 5.

For all $i \in \mathbb{N}$, we have

  1. (i)

    $c_{i+1} \ge \delta c_i$,

  2. (ii)

    $\sum_{j=1}^{i} c_j \le \frac{\delta}{\delta - 1} c_i$,

  3. (iii)

    $\frac{\mathrm{Opt}(c_{i+1})}{c_{i+1}} \ge \frac{\mathrm{Opt}(c)}{c}$ for all $c \ge \delta c_i$.

Proof.

(i) and (iii) follow immediately from the definition of $c_{i+1}$ for all $i \in \mathbb{N}$.

Furthermore, we have

$\sum_{j=1}^{i} c_j \overset{\text{(i)}}{\le} \sum_{j=1}^{i} \left(\frac{1}{\delta}\right)^{i-j} c_i \le \sum_{m=0}^{\infty} \left(\frac{1}{\delta}\right)^{m} c_i = \frac{\delta}{\delta - 1} c_i.$

We are now ready to prove an upper bound on the competitive ratio of Scalingβ.

Theorem 6.

The algorithm Scalingβ is δ-competitive for IncMax with a β-accountable objective function.

Proof.

Let $\pi$ denote the incremental solution of Scalingβ. Let $C \in \mathbb{N}$ and $i \in \mathbb{N}$ such that $C \in \bigl( \sum_{j=1}^{i-1} c_j, \sum_{j=1}^{i} c_j \bigr]$. If $i = 1$, we let $x = 0$; otherwise, let

$x = \frac{1}{\beta} \cdot \frac{\mathrm{Opt}(c_{i-1})}{\mathrm{Opt}(c_i)} \cdot c_i + \sum_{j=1}^{i-1} c_j,$ (1)

i.e., we have

$\mathrm{Opt}(c_{i-1}) = \beta \cdot \frac{x - \sum_{j=1}^{i-1} c_j}{c_i} \cdot \mathrm{Opt}(c_i).$

The value $x$ is chosen such that, if $C \ge x$, then the value of the partially added optimum solution of cardinality $c_i$ in the solution $\pi(C)$ is at least as large as the value of the completely contained optimum solution of cardinality $c_{i-1}$.

Case 1: $C < \delta c_{i-1}$.

As $\pi(C)$ contains the optimum solution of cardinality $c_{i-1}$, by monotonicity, we have

$\frac{\mathrm{Opt}(C)}{f(\pi(C))} \le \frac{\mathrm{Opt}(C)}{\mathrm{Opt}(c_{i-1})} \le \frac{\mathrm{Opt}(\lceil \delta c_{i-1} \rceil - 1)}{\mathrm{Opt}(c_{i-1})} \overset{\text{Obs. 5 (iii)}}{\le} \frac{\lceil \delta c_{i-1} \rceil - 1}{c_{i-1}} \le \delta.$

Case 2: $\delta c_{i-1} \le C < x$.

Note that $C < x$ implies that $x > 0$, i.e., (1) holds. The solution $\pi(C)$ contains the optimum solution of cardinality $c_{i-1}$. Thus,

$\frac{\mathrm{Opt}(C)}{f(\pi(C))} \le \frac{\mathrm{Opt}(C)}{\mathrm{Opt}(c_{i-1})} \overset{\text{Obs. 5 (iii)}}{\le} \frac{\mathrm{Opt}(c_i)}{\mathrm{Opt}(c_{i-1})} \cdot \frac{C}{c_i} \le \frac{\mathrm{Opt}(c_i)}{\mathrm{Opt}(c_{i-1})} \cdot \frac{x}{c_i} \overset{(1)}{=} \frac{1}{\beta} + \frac{\mathrm{Opt}(c_i)}{\mathrm{Opt}(c_{i-1})} \cdot \frac{1}{c_i} \sum_{j=1}^{i-1} c_j \overset{\text{Obs. 5 (iii)}}{\le} \frac{1}{\beta} + \frac{c_i}{c_{i-1}} \cdot \frac{1}{c_i} \sum_{j=1}^{i-1} c_j \overset{\text{Obs. 5 (ii)}}{\le} \frac{1}{\beta} + \frac{\delta}{\delta - 1} = \delta,$

where the last equality follows from the definition of $\delta$.

Case 3: $C \ge \delta c_{i-1}$ and $C \ge x$.

If $i = 1$, by definition of $c_1$, we have

$\frac{\mathrm{Opt}(C)}{f(\pi(C))} \le \frac{\mathrm{Opt}(C)}{\beta \frac{C}{c_1} \mathrm{Opt}(c_1)} \le \frac{1}{\beta} < \delta.$

Now, assume that $i \ge 2$, i.e., (1) holds. The solution $\pi(C)$ contains $C - \sum_{j=1}^{i-1} c_j$ elements from the optimum solution of cardinality $c_i$. Thus,

$\frac{\mathrm{Opt}(C)}{f(\pi(C))} \le \frac{\mathrm{Opt}(C)}{\beta \frac{C - \sum_{j=1}^{i-1} c_j}{c_i} \mathrm{Opt}(c_i)} \overset{\text{Obs. 5 (iii)}}{\le} \frac{1}{\beta} \cdot \frac{C}{C - \sum_{j=1}^{i-1} c_j} \overset{C \ge x}{\le} \frac{1}{\beta} \cdot \frac{x}{x - \sum_{j=1}^{i-1} c_j} \overset{(1)}{=} \frac{1}{\beta} \left( 1 + \frac{\sum_{j=1}^{i-1} c_j}{\frac{1}{\beta} \frac{\mathrm{Opt}(c_{i-1})}{\mathrm{Opt}(c_i)} c_i} \right) = \frac{1}{\beta} + \frac{\mathrm{Opt}(c_i)}{\mathrm{Opt}(c_{i-1})} \cdot \frac{1}{c_i} \sum_{j=1}^{i-1} c_j \overset{\text{Obs. 5 (iii)}}{\le} \frac{1}{\beta} + \frac{c_i}{c_{i-1}} \cdot \frac{1}{c_i} \sum_{j=1}^{i-1} c_j \overset{\text{Obs. 5 (ii)}}{\le} \frac{1}{\beta} + \frac{\delta}{\delta - 1} = \delta.$

We complement this upper bound with a lower bound that, in particular, shows that for $\beta \to 0$, no incremental solution can be better than $\frac{1}{\beta}$-competitive.

Theorem 7.

For all $\beta \in (0, 1]$, the competitive ratio of IncMax with a $\beta$-accountable objective function is at least

$\frac{1}{\beta} \left( 1 + \frac{1}{\lceil 1/\beta \rceil + 1} \right).$
Proof.

Let $k := \lceil \frac{1}{\beta} \rceil + 2$ and $d := \frac{k-1}{k} \beta$. We define an instance where no incremental solution can have a competitive ratio better than $\frac{1}{d}$. Let $U = \{e_1, \dots, e_{k+1}\}$ be the ground set and $f \colon 2^U \to \mathbb{R}_{\ge 0}$ be the objective such that, for $S \subseteq U$,

$f(S) = \begin{cases} \frac{kd}{\beta} = k - 1, & \text{if } \{e_2, \dots, e_{k+1}\} \subseteq S, \\ \max\bigl\{ |\{e_1\} \cap S|,\ |\{e_2, \dots, e_{k+1}\} \cap S| \cdot d \bigr\}, & \text{else.} \end{cases}$

This objective is monotone because of the maximum in the definition and because $\frac{kd}{\beta} \ge kd$. We show that $f$ is $\beta$-accountable. For this, let $S \subseteq U$. We have to show that there is an ordering $(e_{i_1}, \dots, e_{i_{|S|}})$ of the elements in $S$ such that $f(\{e_{i_1}, \dots, e_{i_j}\}) \ge \beta \frac{j}{|S|} f(S)$ for all $j \in \{1, \dots, |S|\}$. If we have $e_1 \in S$ and $f(S) = 1$, we can simply choose $e_1$ to be the first element in the ordering and obtain

$f(\{e_{i_1}, \dots, e_{i_j}\}) = 1 = f(S) \ge \beta \frac{j}{|S|} f(S)$

for all $j \in \{1, \dots, |S|\}$. Otherwise, if $e_1 \notin S$ or $f(S) > 1$, then, with $S' := S \cap \{e_2, \dots, e_{k+1}\}$, either $f(S) = \frac{kd}{\beta} = \frac{|S'| d}{\beta}$, or $f(S) = |S'| d \le \frac{|S'| d}{\beta}$. In our ordering, we can put the elements from $S'$ in the beginning and, for $j \in \{1, \dots, |S'|\}$, obtain

$f(\{e_{i_1}, \dots, e_{i_j}\}) \ge j d = \beta \frac{j}{|S'|} \cdot \frac{|S'| d}{\beta} \ge \beta \frac{j}{|S'|} f(S) \ge \beta \frac{j}{|S|} f(S).$

If $S' = S$, we are done. Otherwise, $|S| = |S'| + 1$ holds and, for $j = |S|$, we have $f(\{e_{i_1}, \dots, e_{i_j}\}) = f(S) \ge \beta \frac{j}{|S|} f(S)$.

Let $\pi$ be an incremental solution for this instance. We consider two cases. First, assume that $e_1$ is not the last element in the ordering $\pi$. We have

$(k-1) d = \frac{(\lceil 1/\beta \rceil + 1)^2}{\lceil 1/\beta \rceil + 2} \beta > \frac{\lceil 1/\beta \rceil^2 + 2 \lceil 1/\beta \rceil}{\lceil 1/\beta \rceil + 2} \beta = \lceil 1/\beta \rceil \beta \ge 1.$

The solution $\pi(k)$ contains $e_1$ and $k - 1$ elements from $\{e_2, \dots, e_{k+1}\}$. Thus,

$f(\pi(k)) = \max\{1, (k-1)d\} = (k-1)d.$

The optimum solution of cardinality $k$ is the set $\{e_2, \dots, e_{k+1}\}$ with a value of $k - 1$. Thus, in this case, the competitive ratio of $\pi$ is at least $\frac{1}{d}$.

Now, consider the other case, i.e., $e_1$ is the last element in the ordering $\pi$. Then the solution $\pi(1)$ contains exactly one element from the set $\{e_2, \dots, e_{k+1}\}$ and therefore has value $f(\pi(1)) = d$. The optimum solution of cardinality $1$ is $\{e_1\}$ with a value of $1$. Thus, also in this case, the competitive ratio of $\pi$ is at least $\frac{1}{d}$.
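The construction above is easy to instantiate and check numerically. The following check (our illustration) uses $\beta = \frac{1}{2}$, so $k = 4$ and $d = \frac{3}{8}$, and confirms the ratio $\frac{1}{d} = \frac{8}{3}$ in both cases of the proof:

```python
import math
from itertools import combinations

beta = 0.5
k = math.ceil(1 / beta) + 2            # k = 4
d = (k - 1) / k * beta                 # d = 3/8
U = list(range(1, k + 2))              # e_1, ..., e_{k+1} encoded as 1..5

def f(S):
    S = set(S)
    rest = S - {1}                     # elements among e_2, ..., e_{k+1}
    if len(rest) == k:                 # {e_2, ..., e_{k+1}} fully contained
        return k - 1                   # = k * d / beta
    return max(len(S & {1}), len(rest) * d)

def opt(C):
    return max(f(A) for A in combinations(U, C))

# Case 1: e_1 not last in pi -> ratio 1/d already at cardinality k.
pi = [2, 3, 1, 4, 5]
ratio_case1 = opt(k) / f(pi[:k])
# Case 2: e_1 last in pi -> ratio 1/d already at cardinality 1.
ratio_case2 = opt(1) / f([2])
```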

In Figure 2, we plot the upper bound from Theorem 6 and the lower bound from Theorem 7 in black, as well as their difference in red. On the one hand, one can see that both bounds are unbounded for $\beta \to 0$. This seems plausible because, for $\beta \to 0$, we have (almost) no guarantee that the value of large sets is bounded by the value of smaller sets. On the other hand, one can see in Figure 2 that the difference between upper and lower bound tends to $0$ for $\beta \to 0$. Thus, in the limit $\beta \to 0$, Scalingβ performs optimally.

3 Incremental Maximization for Subclasses

We employ the previous result to obtain new and improved bounds on the competitive ratio of IncMax with subadditive and γ-α-augmentable objectives.

3.1 Subadditivity

In order to derive results for IncMax with subadditive objectives, we compare β-accountability and subadditivity.

Proposition 8.

Every monotone, subadditive function is $\frac{1}{2}$-accountable.

Proof.

Let $f \colon 2^U \to \mathbb{R}_{\ge 0}$ be monotone and subadditive, $S \subseteq U$ be finite, and $k := |S|$. We define $\ell := \lceil \log_2 k \rceil$, i.e., we have $2^{\ell - 1} < k \le 2^{\ell}$. Furthermore, we define $S_\ell := S$ and iteratively, for $j \in \{\ell - 1, \dots, 0\}$, sets $S_j \subseteq S_{j+1}$ with $|S_j| = \lceil \frac{k}{2^{\ell - j}} \rceil$ and $f(S_j) \ge \frac{1}{2} f(S_{j+1})$. This is possible as we will see now. We have $2 \lceil \frac{k}{2^{\ell - j}} \rceil \ge \lceil \frac{2k}{2^{\ell - j}} \rceil = \lceil \frac{k}{2^{\ell - j - 1}} \rceil$, i.e., we can choose $A, B \subseteq S_{j+1}$ with $|A| = |B| = \lceil \frac{k}{2^{\ell - j}} \rceil$ and $A \cup B = S_{j+1}$. By subadditivity, we have $f(A) \ge \frac{1}{2} f(S_{j+1})$ or $f(B) \ge \frac{1}{2} f(S_{j+1})$. Thus, we can choose $S_j \in \{A, B\}$ with the desired properties.

Now, consider any order $(e_1, \dots, e_k)$ of $S$ with $\{e_1, \dots, e_{|S_j|}\} = S_j$ for all $j \in \{0, \dots, \ell\}$. Let $i \in \{1, \dots, k\}$ and $j \in \{0, \dots, \ell\}$ such that $\lceil \frac{k}{2^{\ell - j}} \rceil \le i < \lceil \frac{k}{2^{\ell - j - 1}} \rceil$. This implies that $S_j \subseteq \{e_1, \dots, e_i\}$ and, because $i \in \mathbb{N}$, that $i < \frac{k}{2^{\ell - j - 1}}$. Together with monotonicity of $f$, we obtain

$f(\{e_1, \dots, e_i\}) \ge f(S_j) \ge \left(\frac{1}{2}\right)^{\ell - j} f(S) = \frac{1}{2^{\ell - j}} f(S) \ge \frac{1}{2} \cdot \frac{i}{k} f(S),$

which yields $\frac{1}{2}$-accountability.
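The halving argument is constructive and easy to mirror in code. This sketch (our illustration; the coverage objective is a hypothetical example of a subadditive function) builds the nested sets $S_0 \subseteq \cdots \subseteq S_\ell$ and checks the $\frac{1}{2}$-accountability condition for the resulting order:

```python
import math

def half_accountable_order(f, S):
    """Build an ordering of S via the halving construction of the proof:
    repeatedly split the current set into two halves (rounded up) whose
    union is the set; by subadditivity, one half keeps at least half
    of the value."""
    current = list(S)
    chain = [current]
    while len(current) > 1:
        half = math.ceil(len(current) / 2)
        A, B = current[:half], current[-half:]      # A and B cover current
        current = A if f(A) >= f(B) else B          # keep the valuable half
        chain.append(current)
    order, seen = [], set()
    for level in reversed(chain):                   # innermost set first
        for e in level:
            if e not in seen:
                seen.add(e)
                order.append(e)
    return order

def satisfies_half_accountability(f, order):
    k, total = len(order), f(order)
    return all(f(order[:i]) >= 0.5 * i / k * total - 1e-9
               for i in range(1, k + 1))

# Hypothetical subadditive coverage objective: value = items covered.
cover = {'a': {1, 2}, 'b': {2, 3}, 'c': {3}, 'd': {4, 5, 6}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0

order = half_accountable_order(f, ['a', 'b', 'c', 'd'])
ok = satisfies_half_accountability(f, order)
```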

We combine Theorem 6 and Proposition 8 to obtain an upper bound on the competitive ratio of IncMax with a subadditive objective. This immediately proves Theorem 3.

Theorem 3. [Restated, see original statement.]

IncMax with a subadditive objective has competitive ratio at most $2 + \sqrt{2}$.

We complement this upper bound on the competitive ratio of IncMax with subadditive objective functions by a lower bound. To this end, we employ separability of problem instances of incremental maximization, a property introduced in [10].

Definition 9.

An instance of IncMax with objective $f \colon 2^U \to \mathbb{R}_{\ge 0}$ is called separable if there exist a partition $U = U_1 \mathbin{\dot\cup} U_2 \mathbin{\dot\cup} \cdots$ of $U$ and densities $d_i > 0$ such that, for all $S \subseteq U$,

$f(S) = \max_i \{ |S \cap U_i| \cdot d_i \}.$
Proposition 10.

The objective function of every separable problem instance is subadditive.

Proof.

Let a separable instance with objective function $f \colon 2^U \to \mathbb{R}_{\ge 0}$ be given. Furthermore, let $U = U_1 \mathbin{\dot\cup} U_2 \mathbin{\dot\cup} \cdots$ be a partition of $U$, and let $d_1, d_2, \dots > 0$ be the densities such that, for all $S \subseteq U$,

$f(S) = \max_i |S \cap U_i| \cdot d_i.$

In order to show subadditivity, we fix two sets $A, B \subseteq U$. Let $i$ be the index such that $f(A \cup B) = |(A \cup B) \cap U_i| \cdot d_i$. Then

$f(A \cup B) = |(A \cup B) \cap U_i| \cdot d_i = |A \cap U_i| \cdot d_i + |(B \setminus A) \cap U_i| \cdot d_i \le |A \cap U_i| \cdot d_i + |B \cap U_i| \cdot d_i \le \bigl( \max_j |A \cap U_j| \cdot d_j \bigr) + \bigl( \max_j |B \cap U_j| \cdot d_j \bigr) = f(A) + f(B).$
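Proposition 10 can also be spot-checked by random testing. This snippet (our illustration, with an arbitrary partition and densities) verifies subadditivity of a separable objective on random pairs of sets:

```python
import random

random.seed(0)
U = list(range(12))
parts = [set(U[0:4]), set(U[4:9]), set(U[9:12])]   # partition U_1, U_2, U_3
dens = [1.5, 0.7, 2.0]                             # densities d_i > 0

def f(S):
    S = set(S)
    return max(len(S & Ui) * di for Ui, di in zip(parts, dens))

def random_subset():
    return set(random.sample(U, random.randrange(len(U) + 1)))

subadditive = all(
    f(A | B) <= f(A) + f(B) + 1e-12
    for A, B in ((random_subset(), random_subset()) for _ in range(200))
)
```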

This result yields that the (non-strict) competitive ratio of IncMax with subadditive objectives is at least that for separable instances. In [10], the authors show a lower bound of 2.246 on the competitive ratio in the separable setting. Thus, we obtain the following.

Corollary 11.

The competitive ratio of IncMax with a subadditive objective function is at least 2.246.

This bound is weaker than the lower bound of $\frac{8}{3}$ from Theorem 7 for IncMax with a $\frac{1}{2}$-accountable objective. Yet, note that the lower bound construction in the proof of Theorem 7 is not subadditive.

3.2 $\gamma$-$\alpha$-Augmentability

We now turn to comparing β-accountability and γ-α-augmentability.

Proposition 12.

For all $\gamma \in (0, 1]$ and $\alpha \ge 1$, every monotone, $\gamma$-$\alpha$-augmentable function is $\frac{\gamma}{\alpha}$-accountable.

Proof.

For $\gamma \in (0, 1]$ and $\alpha \ge 1$, let $f \colon 2^U \to \mathbb{R}_{\ge 0}$ be monotone and $\gamma$-$\alpha$-augmentable. Let $S \subseteq U$ be finite and $k := |S|$. By $\gamma$-$\alpha$-augmentability, there exists an ordering $(e_1, \dots, e_k)$ of the elements in $S$ such that, for all $i \in \{0, \dots, k-1\}$,

$f(\{e_1, \dots, e_{i+1}\}) - f(\{e_1, \dots, e_i\}) \ge \frac{\gamma f(S) - \alpha f(\{e_1, \dots, e_i\})}{k - i}.$

For $i \in \{0, \dots, k\}$, let $S_i := \{e_1, \dots, e_i\}$. Then, for $i \in \{0, \dots, k-1\}$, this yields

$f(S_{i+1}) - f(S_i) \ge \alpha \cdot \frac{\frac{\gamma}{\alpha} f(S) - f(S_i)}{k - i} \overset{\alpha \ge 1}{\ge} \frac{\frac{\gamma}{\alpha} f(S) - f(S_i)}{k - i}.$ (2)

To show $\frac{\gamma}{\alpha}$-accountability of $f$, we prove by induction that, for $i \in \{1, \dots, k\}$, we have

$f(S_i) \ge \frac{\gamma}{\alpha} \cdot \frac{i}{k} f(S).$ (3)

Note that, if $f(S_i) > \frac{\gamma}{\alpha} f(S)$ for some $i$, then (3) follows for all larger indices directly by monotonicity; we may therefore assume $\frac{\gamma}{\alpha} f(S) - f(S_i) \ge 0$ when applying the second inequality of (2).

For $i = 0$, (2) yields

$f(S_1) \ge \frac{1}{k} \left( \frac{\gamma}{\alpha} f(S) - f(\emptyset) \right) + f(\emptyset) = \frac{\gamma}{\alpha} \cdot \frac{1}{k} f(S) + \left( 1 - \frac{1}{k} \right) f(\emptyset) \ge \frac{\gamma}{\alpha} \cdot \frac{1}{k} f(S),$

where the last inequality follows from non-negativity of $f$.

Now, suppose that (3) holds for some $i \in \{0, \dots, k-1\}$. Then

$f(S_{i+1}) \overset{(2)}{\ge} f(S_i) + \frac{\frac{\gamma}{\alpha} f(S) - f(S_i)}{k - i} = \frac{\gamma}{\alpha} \cdot \frac{1}{k-i} f(S) + \left( 1 - \frac{1}{k-i} \right) f(S_i) \overset{(3)}{\ge} \frac{\gamma}{\alpha} \cdot \frac{1}{k-i} f(S) + \left( \frac{k-i-1}{k-i} \right) \cdot \frac{\gamma}{\alpha} \cdot \frac{i}{k} f(S) = \frac{\gamma}{\alpha} \cdot \frac{k + ki - i^2 - i}{(k-i)k} f(S) = \frac{\gamma}{\alpha} \cdot \frac{(k-i)(i+1)}{(k-i)k} f(S) = \frac{\gamma}{\alpha} \cdot \frac{i+1}{k} f(S),$

which concludes the induction.

We combine the results from Proposition 12 and Theorem 6 to obtain an upper bound on the competitive ratio of IncMax with a monotone and γ-α-augmentable objective. This immediately proves Theorem 4.

Theorem 13.

For $\gamma \in (0, 1]$ and $\alpha \ge 1$, Scalingγ/α has a competitive ratio of at most

$\frac{\alpha}{2\gamma} + 1 + \sqrt{\frac{\alpha^2}{4\gamma^2} + 1}$

for IncMax with a γ-α-augmentable objective.

We compare this upper bound on the competitive ratio of IncMax with a γ-α-augmentable objective to

$\frac{\alpha}{\gamma} \cdot \frac{e^{\alpha}}{e^{\alpha} - 1},$

the competitive ratio of the greedy algorithm for IncMax with γ-α-augmentable objectives [11]. Let the ratio between the two upper bounds be denoted by

$r(\gamma, \alpha) := \frac{\frac{\alpha}{2\gamma} + 1 + \sqrt{\frac{\alpha^2}{4\gamma^2} + 1}}{\frac{\alpha}{\gamma} \cdot \frac{e^{\alpha}}{e^{\alpha} - 1}} = \frac{e^{\alpha} - 1}{e^{\alpha}} \left( \frac{1}{2} + \frac{\gamma}{\alpha} + \sqrt{\frac{1}{4} + \frac{\gamma^2}{\alpha^2}} \right).$

We have $\lim_{\gamma \to 0} r(\gamma, \alpha) = \frac{e^{\alpha} - 1}{e^{\alpha}} < 1$, i.e., for small values of $\gamma$, Scalingγ/α performs better than the greedy algorithm. For $\gamma \in (0, 1]$ and $\alpha \ge 1$, we have $r(\gamma, \alpha) = 1$ if and only if $\gamma = \frac{\alpha e^{\alpha}}{e^{2\alpha} - 1}$. This value lies in the interval $\left(0, \frac{e}{e^2 - 1}\right] \subseteq (0, 0.426)$ and, for $\alpha \to \infty$, approaches $0$ (cf. Figure 3). Thus, for large $\alpha$, the greedy algorithm performs better than Scalingγ/α for almost all values of $\gamma \in (0, 1]$. This is probably due to the fact that $\gamma$-$\alpha$-augmentability relaxes an inequality that is a core estimate in the analysis of the greedy algorithm for monotone and submodular functions.
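The crossover between the two bounds can be double-checked numerically (our illustration): at $\gamma = \frac{\alpha e^{\alpha}}{e^{2\alpha} - 1}$ the ratio $r$ equals $1$, and $r$ tends to $\frac{e^{\alpha} - 1}{e^{\alpha}}$ as $\gamma \to 0$:

```python
import math

def scaling_bound(gamma, alpha):
    """Upper bound of Theorem 13 for Scaling_{gamma/alpha}."""
    return alpha / (2 * gamma) + 1 + math.sqrt(alpha ** 2 / (4 * gamma ** 2) + 1)

def greedy_bound(gamma, alpha):
    """Tight competitive ratio of the greedy algorithm from [11]."""
    return alpha / gamma * math.exp(alpha) / (math.exp(alpha) - 1)

def r(gamma, alpha):
    return scaling_bound(gamma, alpha) / greedy_bound(gamma, alpha)

alpha = 2.0
crossover = alpha * math.exp(alpha) / (math.exp(2 * alpha) - 1)
at_cross = r(crossover, alpha)   # the two bounds coincide here
near_zero = r(1e-9, alpha)       # approaches (e^alpha - 1) / e^alpha
```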

Figure 3: Plot of the value of γ depending on α for which Scalingγ/α and the greedy algorithm achieve the same competitive ratio. For values of γ below this line, Scalingγ/α performs better, and vice versa.

References

  • [1] Aaron Bernstein, Yann Disser, Martin Groß, and Sandra Himburg. General bounds for incremental maximization. Mathematical Programming, 191(2):953–979, 2022. doi:10.1007/s10107-020-01576-0.
  • [2] Andrew An Bian, Joachim M. Buhmann, Andreas Krause, and Sebastian Tschiatschek. Guarantees for greedy maximization of non-submodular functions with applications. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 498–507, 2017. URL: https://proceedings.mlr.press/v70/bian17a.html.
  • [3] Avrim Blum, Prasad Chalasani, Don Coppersmith, William R. Pulleyblank, Prabhakar Raghavan, and Madhu Sudan. The minimum latency problem. In Proceedings of the 26th Annual ACM Symposium on Theory of Computing (STOC), pages 163–171, 1994. doi:10.1145/195058.195125.
  • [4] Kamalika Chaudhuri, Brighten Godfrey, Satish Rao, and Kunal Talwar. Paths, trees, and minimum latency tours. In 44th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 36–45, 2003. doi:10.1109/SFCS.2003.1238179.
  • [5] Marek Chrobak, Claire Kenyon, John Noga, and Neal E. Young. Incremental medians via online bidding. Algorithmica, 50(4):455–478, 2008. doi:10.1007/s00453-007-9005-x.
  • [6] Abhimanyu Das and David Kempe. Approximate submodularity and its applications: Subset selection, sparse approximation and dictionary selection. Journal of Machine Learning Research, 19:1–34, 2018. URL: https://jmlr.org/papers/v19/16-534.html.
  • [7] Yann Disser, Svenja M. Griesbach, Max Klimm, and Annette Lutz. Bicriterial approximation for the incremental prize-collecting steiner-tree problem. In ESA 2024, pages 47:1–47:16, 2024. doi:10.4230/LIPIcs.ESA.2024.47.
  • [8] Yann Disser, Max Klimm, Annette Lutz, and David Weckbecker. Fractionally subadditive maximization under an incremental knapsack constraint. SIAM Journal on Discrete Mathematics, 38(1):764–789, 2024.
  • [9] Yann Disser, Max Klimm, Nicole Megow, and Sebastian Stiller. Packing a knapsack of unknown capacity. SIAM Journal on Discrete Mathematics, 31(3):1477–1497, 2017. doi:10.1137/16M1070049.
  • [10] Yann Disser, Max Klimm, Kevin Schewior, and David Weckbecker. Incremental maximization via continuization. In 50th International Colloquium on Automata, Languages, and Programming (ICALP), volume 261, pages 47:1–47:17, 2023. doi:10.4230/LIPIcs.ICALP.2023.47.
  • [11] Yann Disser and David Weckbecker. Unified greedy approximability beyond submodular maximization. SIAM Journal on Discrete Mathematics, 38(1):248–379, 2024. doi:10.1137/23M1569265.
  • [12] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 45(4):634–652, 1998. doi:10.1145/285055.285059.
  • [13] Michel X. Goemans and Jon M. Kleinberg. An improved approximation ratio for the minimum latency problem. Mathematical Programming, 82:111–124, 1998. doi:10.1007/BF01585867.
  • [14] Michel X. Goemans and Francisco Unda. Approximating incremental combinatorial optimization problems. In Proceedings of the 20th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), pages 6:1–6:14, 2017. doi:10.4230/LIPIcs.APPROX-RANDOM.2017.6.
  • [15] Teofilo F. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38:293–306, 1985. doi:10.1016/0304-3975(85)90224-5.
  • [16] Jeff Hartline and Alexa Sharp. An incremental model for combinatorial maximization problems. In Proceedings of the 5th International Workshop on Experimental Algorithms (WEA), pages 36–48, 2006. doi:10.1007/11764298.
  • [17] Jeff Hartline and Alexa Sharp. Incremental flow. Networks, 50(1):77–85, 2007. doi:10.1002/net.20168.
  • [18] Thomas Kalinowski, Dmytro Matsypura, and Martin W.P. Savelsbergh. Incremental network design with maximum flows. European Journal of Operational Research, 242(1):51–62, 2015. doi:10.1016/j.ejor.2014.10.003.
  • [19] Yasushi Kawase, Hanna Sumita, and Takuro Fukunaga. Submodular maximization with uncertain knapsack capacity. SIAM Journal on Discrete Mathematics, 33(3):1121–1145, 2019. doi:10.1137/18M1174428.
  • [20] Max Klimm and Martin Knaack. Maximizing a submodular function with bounded curvature under an unknown knapsack constraint. In Proceedings of the 25th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), pages 49:1–49:19, 2022. doi:10.4230/LIPIcs.APPROX/RANDOM.2022.49.
  • [21] Guolong Lin, Chandrashekhar Nagarajan, Rajmohan Rajaraman, and David P. Williamson. A general approach for incremental approximation and hierarchical clustering. SIAM Journal on Computing, 39(8):3633–3669, 2010. doi:10.1137/070698257.
  • [22] Nicole Megow and Julián Mestre. Instance-sensitive robustness guarantees for sequencing with unknown packing and covering constraints. In Proceedings of the 4th Innovations in Theoretical Computer Science Conference (ITCS), pages 495–504, 2013. doi:10.1145/2422436.2422490.
  • [23] Ramgopal R. Mettu and C. Greg Plaxton. The online median problem. SIAM Journal on Computing, 32(3):816–832, 2003. doi:10.1137/S0097539701383443.
  • [24] Alfredo Navarra and Cristina M. Pinotti. Online knapsack of unknown capacity: How to optimize energy consumption in smartphones. Theoretical Computer Science, 697:98–109, 2017. doi:10.1016/j.tcs.2017.07.029.
  • [25] George Nemhauser, Laurence Wolsey, and Michael Fisher. An analysis of approximations for maximizing submodular set functions - I. Mathematical Programming, 14:265–294, 1978. doi:10.1007/BF01588971.
  • [26] C. Greg Plaxton. Approximation algorithms for hierarchical location problems. Journal of Computer and System Sciences, 72(3):425–443, 2006. doi:10.1016/j.jcss.2005.09.004.
  • [27] Richard Rado. A theorem on independence relations. The Quarterly Journal of Mathematics, 13:83–89, 1942. doi:10.1093/qmath/os-13.1.83.
  • [28] Clemens Thielen, Morten Tiedemann, and Stephan Westphal. The online knapsack problem with incremental capacity. Mathematical Methods of Operations Research, 83(2):207–242, 2016. doi:10.1007/s00186-015-0526-9.
  • [29] Francisco T. Unda Surawski. Combinatorial Incremental Problems. PhD thesis, Massachusetts Institute of Technology, 2018. URL: https://dspace.mit.edu/bitstream/handle/1721.1/122890/1126788574-MIT.pdf?sequence=1&isAllowed=y.
  • [30] Xianbin Zhu, Wenjun Li, Yongjie Yang, and Jianxin Wang. Incremental algorithms for the maximum internal spanning tree problem. Science China Information Sciences, 64, May 2021. doi:10.1007/s11432-019-2630-2.