The Alternating Stock Size Problem and the Gasoline Puzzle

Given a set S of integers whose sum is zero, consider the problem of finding a permutation of these integers such that: (i) all prefix sums of the ordering are non-negative, and (ii) the maximum prefix sum is minimized. Kellerer et al. referred to this problem as the "Stock Size Problem" and showed that it can be approximated to within 3/2. They also showed that an approximation ratio of 2 can be achieved via several simple algorithms. We consider a related problem, which we call the "Alternating Stock Size Problem", where the number of positive and negative integers in the input set S are equal. The problem is the same as above, but we are additionally required to alternate the positive and negative numbers in the output ordering. This problem also has several simple 2-approximations. We show that it can be approximated to within 1.79. Then we show that this problem is closely related to an optimization version of the gasoline puzzle due to Lovász, in which we want to minimize the size of the gas tank necessary to go around the track. We present a 2-approximation for this problem, using a natural linear programming relaxation whose feasible solutions are doubly stochastic matrices. Our novel rounding algorithm is based on a transformation that yields another doubly stochastic matrix with special properties, from which we can extract a suitable permutation.


Introduction
Suppose there is a set of jobs that can be processed in any order. Each job requires a specified amount of a particular resource, e.g. gasoline, which can be supplied in an amount chosen from a specified set of quantities. The limitation is that the storage space for this resource is bounded, so it must be replenished as it is used. The goal is to order the jobs and the replenishment amounts so that the required quantity of the resource is always available for the job being processed and so that the storage space is never exceeded.
More formally, we are given a set of integers Z = {z_1, z_2, . . . , z_n} whose sum is zero. For a permutation σ, a prefix sum is $\sum_{i=1}^{t} z_{\sigma(i)}$ for t ∈ [1, n]. Our goal is to find a permutation of the elements in Z such that (i) each prefix sum is non-negative, and (ii) the maximum prefix sum is minimized. This problem is known as the stock size problem. Kellerer, Kotov, Rendl and Woeginger presented a simple algorithm with a guarantee of µ_x + µ_y, where µ_x is the largest number in Z, and µ_y is the absolute value of the negative number in Z with the largest absolute value. (We sometimes use µ = max{µ_x, µ_y}.) Since both µ_x and µ_y are lower bounds on the value S* of an optimal solution, this shows that the problem can be approximated to within a factor of 2. Additionally, they presented algorithms with approximation guarantees of 8/5 and 3/2 [KKRW98].
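To make the simple 2-approximation concrete, here is a minimal Python sketch of one greedy rule achieving the µ_x + µ_y guarantee. This is our own reconstruction of the idea, not code from [KKRW98]: schedule a positive number while the stock is below µ_y, and a negative number otherwise; it assumes the input contains at least one negative number.

```python
def greedy_stock_order(zs):
    """Order integers summing to zero so that every prefix sum is
    non-negative and the maximum prefix sum stays below mu_x + mu_y.

    Greedy rule (a sketch): while the current stock is below mu_y,
    schedule any remaining positive number; otherwise schedule a
    remaining negative number.
    """
    pos = sorted((z for z in zs if z > 0), reverse=True)
    neg = sorted(z for z in zs if z <= 0)
    mu_y = -min(neg)  # largest absolute value among the negatives
    order, stock = [], 0
    while pos or neg:
        if pos and (stock < mu_y or not neg):
            z = pos.pop()   # smallest remaining positive number
        else:
            z = neg.pop()   # remaining negative number closest to zero
        order.append(z)
        stock += z
    return order
```

The stock only increases from a value below µ_y, so it never reaches µ_y + µ_x; and whenever a negative number is forced while the stock is below µ_y, all positives are exhausted, so every remaining negative has absolute value at most the current stock.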

The Alternating Stock Size Problem
In this paper we first consider a restricted version of the stock size problem in which we require that the positive and negative numbers in the output permutation alternate. We refer to this problem as the alternating stock size problem. A motivation for this problem is that we could schedule tasks in advance of knowing the input data. For example, suppose we want to stock and remove items from a warehouse and each task occupies a certain time slot. If we want to plan ahead, we may want to designate each slot as a stocking or a removing slot in advance, e.g. all odd time slots will be used for stocking and all even time slots for de-stocking. This could be beneficial in situations where some preparation is required for each type of time slot.
The input for our new problem is two sets of positive integers, X = {x_1 ≥ · · · ≥ x_n} and Y = {y_1 ≥ · · · ≥ y_n}, such that |X| = |Y| and the two sets have equal sums. The elements of X represent the elements to be "added" and the elements of Y are those to be "removed". Note that, here, µ_y = y_1 and µ_x = x_1. We now formally define the new problem.
Definition 1. The goal of the alternating stock size problem is to find permutations σ and ν of {1, . . . , n} such that every prefix sum of the alternating sequence $x_{\sigma(1)}, -y_{\nu(1)}, x_{\sigma(2)}, -y_{\nu(2)}, \ldots, x_{\sigma(n)}, -y_{\nu(n)}$ is non-negative, and the maximum prefix sum is minimized.

Although this problem is a variant of the stock size problem, the algorithms found in [KKRW98] do not provide approximation guarantees, since they do not necessarily produce feasible solutions for the alternating problem. Indeed, even the optimal solutions for these two problems on the same instance can differ greatly: there are instances on which the optimal value for the alternating problem approaches 2p, while it is p for the original stock size problem. Thus, the gap between the optimal solutions for the two problems can approach two.
We can show the following facts about the alternating problem. (i) There is always a feasible solution. (ii) The problem is NP-hard (as is the stock size problem). (iii) It is still the case that 2µ is an upper bound on the value of an optimal solution. Our main result for this problem is to give an algorithm with an approximation guarantee of 1.79 in Section 2.
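Facts (i) and (iii) can be verified by brute force on small instances. The following Python sketch (our own exponential-time helper, for illustration only) searches all alternating orders and returns the optimal value, or None if no feasible order exists:

```python
from itertools import permutations

def best_alternating_value(X, Y):
    """Exhaustively search alternating orders x, y, x, y, ... and
    return the minimum, over feasible orders (all prefix sums
    non-negative), of the maximum prefix sum; None if infeasible."""
    best = None
    for px in permutations(X):
        for py in permutations(Y):
            s, peak, ok = 0, 0, True
            for x, y in zip(px, py):
                s += x
                peak = max(peak, s)
                s -= y
                if s < 0:
                    ok = False
                    break
            if ok and (best is None or peak < best):
                best = peak
    return best
```

On any instance where this search succeeds, the returned value lies between µ and 2µ, matching facts (i) and (iii).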

Connections to the Gasoline Puzzle
The following well-known puzzle appears on page 31 in [Lov79]: Along a speed track there are some gas stations. The total amount of gasoline available in them is equal to what our car (which has a very large tank) needs for going around the track. Prove that there is a gas station such that if we start there with an empty tank, we shall be able to go around the track without running out of gasoline.
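The standard proof of the puzzle is constructive: starting just past the point where the running gas balance attains its minimum works. A brief Python sketch of this argument (the list representation is our own):

```python
def valid_start(gains):
    """gains[i] = gas available at station i minus gas needed to reach
    station i+1; the gains sum to zero. Returns a starting station
    from which the running balance never goes negative: the classic
    argument picks the station just after the minimum prefix sum."""
    s, min_s, start = 0, 0, 0
    for i, g in enumerate(gains):
        s += g
        if s < min_s:
            min_s, start = s, (i + 1) % len(gains)
    return start
```

Starting there, every later prefix is measured relative to the global minimum, so it can never dip below zero.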
Suppose that the capacity of each gas station is represented by a positive integer and the distance of each road segment is represented by a negative integer. For simplicity, suppose that it takes one unit of gas to travel one unit of road. Then the assumption of the puzzle implies that the sum of the positive integers equals the absolute value of the sum of the negative integers. In fact, if we are allowed to permute the gas stations and the road segments (placing exactly one gas station between every pair of consecutive road segments), and our goal is to minimize the size of the gas tank required to go around the track (beginning from a feasible starting point), then this is exactly the alternating stock size problem.
This leads to the following natural problem: Suppose the road segments are fixed and we are only allowed to rearrange (i.e. permute) the gas stations. In other words, between each pair of consecutive road segments (represented by negative integers), there is a spot for exactly one gas station (represented by positive integers, the capacities), and we can choose which gas station to place in each spot. The goal is to minimize the size of the tank required to get around the track, assuming we can choose our starting gas station. What is the complexity of this problem?
We argue in Appendix A that this problem is NP-hard. Our algorithm for the alternating stock size problem specifically requires that there is flexibility in placing both the x-values and the y-values. Therefore, it does not appear to be applicable to this problem, where the x-values are pre-assigned to fixed positions. Let us now formally define the gasoline problem, which is the second problem we will consider in this paper.

The Gasoline Problem
As input, we are given the two sets of positive integers X = {x_1, x_2, . . . , x_n} and Y = {y_1 ≥ y_2 ≥ · · · ≥ y_n}, where the x_i's are fixed in the given order and $\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i$. Our goal is to find a permutation π that minimizes the value of η:

$$\eta \;=\; \max_{[k,\ell]} \left( \sum_{i \in [k,\ell]} x_i \;-\; \sum_{i \in [k,\ell-1]} y_{\pi(i)} \right). \qquad (1)$$

Given a circle with n points labeled 1 through n, the interval [k, ℓ] denotes a consecutive subset of integers assigned to points k through ℓ. For example, [5, 8] = {5, 6, 7, 8}, and [n − 1, 3] = {n − 1, n, 1, 2, 3}. We will often use µ_y to refer to y_1, i.e. the maximum y-value, which is a lower bound on the optimal value of a solution.
Observe that in (1) we consider only intervals that contain one more x-value than y-value. One might argue that, in order to model our problem correctly, one also has to look at intervals that contain one more y-value than x-value. However, let I be such an interval and let I′ = [1, n] \ I. Then the difference between the x-values and the y-values in I′ equals the negative of the corresponding difference in I, due to the assumption $\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i$. We can also write the constraint (1) as:

$$\alpha \;\le\; \sum_{i \in [k,\ell]} x_i \;-\; \sum_{i \in [k,\ell-1]} y_{\pi(i)} \;\le\; \beta \quad \text{for all intervals } [k,\ell], \qquad (3)$$

where α ≤ 0, β ≥ 0 and η = β − α. This version is slightly more general, since it encompasses the scenario in which we would like to minimize β for some fixed value of α. (With these constraints, it is no longer required that the sum of the x_i's equals the sum of the y_i's.) What is the approximability of this problem? Obtaining a constant-factor approximation appears to be challenging, since the following example shows that it is no longer the case that 2µ is an upper bound. Despite this, we show in Section 3 that there is in fact a 2-approximation algorithm for the gasoline problem.
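For concreteness, the objective can be evaluated by brute force over all cyclic intervals. A straightforward O(n^2) Python sketch (our own helper, not part of the algorithm in Section 3):

```python
def eta(xs, ys_perm):
    """Value of a gasoline solution: xs are fixed in cyclic order and
    ys_perm[i] is the y-value placed immediately after x_{i+1}.
    Returns the maximum, over cyclic intervals containing one more
    x-value than y-value, of (sum of x's) - (sum of y's)."""
    n = len(xs)
    best = float("-inf")
    for k in range(n):                 # interval start
        sx = sy = 0
        for t in range(n):             # extend the interval
            i = (k + t) % n
            sx += xs[i]
            best = max(best, sx - sy)  # one extra x-value so far
            sy += ys_perm[i]
    return best
```

Zero-valued entries are permitted here, since the reduction from the generalized problem below introduces them.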
Example showing an unbounded gap between OPT and µ. Suppose X and Y each have n entries: the first n/2 entries of X are equal to 2 and the remaining entries are equal to 0, while every entry of Y is equal to 1. In this example, µ = 2. However, the optimal value is roughly n/2: regardless of the permutation π, the interval consisting of the first k positions (containing k x-values and k − 1 y-values) has value 2k − (k − 1) = k + 1 for every k ≤ n/2.
The requirement that the x- and y-jobs alternate may seem somewhat artificial or restrictive. A natural generalization of the gasoline problem (which we will refer to as the generalized gasoline problem) is one in which the x-jobs are assigned to a set of predetermined positions that are not necessarily alternating. As in the gasoline problem, our goal is to assign the y-jobs to the remaining slots so as to minimize the difference between the maximum and the minimum prefix. There is a simple reduction from this seemingly more general problem to the gasoline problem. Let X = {x_1, x_2, . . . , x_{n_x}} and Y = {y_1 ≥ y_2 ≥ · · · ≥ y_{n_y}} be the input, where the x-jobs are assigned to n_x (arbitrary) slots. The remaining n_y slots are for the y-jobs. To reduce to an instance of the gasoline problem (with alternation), we do the following. For each set of x-jobs assigned to adjacent slots, we add them up to form a single job in a single slot. Between each pair of consecutive y-jobs, we place a new x-slot whose assigned x-job has value zero. Thus, we obtain an instance of the gasoline problem as originally defined at the beginning of this section.
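A Python sketch of this reduction (the slot representation is our own): adjacent x-slots are merged by summing their values, and a zero-value x-job is inserted between consecutive y-slots, so that the result alternates.

```python
def to_alternating(slots):
    """Reduce a generalized gasoline instance to an alternating one.
    `slots` is a cyclic list of ('x', value) and ('y', None) entries,
    where the y-slots are still unassigned. Assumes at least one
    y-slot. Returns an alternating list x, y, x, y, ..."""
    # rotate so that a y-slot comes first
    start = next(i for i, (t, _) in enumerate(slots) if t == 'y')
    rot = slots[start:] + slots[:start]
    out, run = [], 0
    for t, v in rot:
        if t == 'x':
            run += v                   # accumulate adjacent x-slots
        else:
            out.append(('x', run))     # zero if two y's were adjacent
            out.append(('y', None))
            run = 0
    if run:                            # trailing x's wrap around
        out[0] = ('x', out[0][1] + run)
    return out
```

The output preserves the cyclic structure and the total x-value, and the optimal β − α is unchanged, since merged jobs occupy consecutive positions and zero-value jobs do not affect any interval sum.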

The Slated Stock Size Problem
Our new algorithm, developed to solve the gasoline problem, can also be applied to a natural generalization of the alternating stock size problem, in which we relax the required alternation between the x- and y-jobs and consider a scenario in which each slot is labeled as an x- or a y-slot and can only accommodate a job of the designated type. In other words, in the solution, the x-jobs and y-jobs will follow some specified pattern that is not necessarily alternating. The goal is to find a feasible assignment of x- and y-jobs to x- and y-slots, respectively, that minimizes the difference between the prefixes with highest and lowest values. Since this is simply a generalization of the stock size problem with the additional condition that each slot is slated as an x- or a y-slot, we refer to this problem as the slated stock size problem.
Formally, we are given two sets of positive integers X = {x_1 ≥ x_2 ≥ · · · ≥ x_{n_x}} and Y = {y_1 ≥ y_2 ≥ · · · ≥ y_{n_y}}, and n = n_x + n_y slots, each designated as either an x-slot or a y-slot. Let I_x and I_y denote the indices of the x- and y-slots, respectively, and let P denote a prefix of the slots. The objective is to find an assignment of jobs to slots of the matching type that minimizes the value of β − α, where α is the minimum, and β the maximum, over all prefixes P, of the total value of the x-jobs assigned to P minus the total value of the y-jobs assigned to P. For this problem, we obtain an algorithm that produces a solution of value at most OPT + µ_x + µ_y. We are not aware of any other algorithms for this problem with a comparable guarantee.
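The slated objective can be evaluated directly. A small Python sketch, using our own representation (a string of slot labels together with the job values assigned, in order, to slots of each type):

```python
def slated_value(pattern, x_assign, y_assign):
    """Evaluate a slated stock size solution. `pattern` is a string of
    'x'/'y' slot labels; x_assign and y_assign list the job values in
    the order they are placed into slots of their type. Returns
    (alpha, beta), the minimum and maximum prefix values."""
    xi = yi = s = 0
    alpha = beta = 0
    for c in pattern:
        if c == 'x':
            s += x_assign[xi]
            xi += 1
        else:
            s -= y_assign[yi]
            yi += 1
        alpha = min(alpha, s)
        beta = max(beta, s)
    return alpha, beta
```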

Related Work
The work most related to the alternating stock size problem is contained in the aforementioned paper by Kellerer et al. [KKRW98]. Earlier, Abdel-Wahab and Kameda studied a variant of the stock size problem in which the output sequence of the jobs is required to obey a given set of precedence constraints, but the stock size is also allowed to be negative. They gave a polynomial-time algorithm for the case when the precedence constraints are series-parallel [AWK78]. The gasoline problem and its generalization are related to those found in a widely-studied research area known as resource constrained scheduling, where the goal is usually to minimize the completion time or to maximize the number of jobs completed in a given timeframe while subject to some limited resources [BLK83, CK82]. For example, in addition to time on a machine, a job could require a certain amount of another resource and would be eligible to be scheduled only if the inventory for this resource is sufficient. A general framework for these types of problems is called scheduling with non-renewable resources. Here, non-renewable means not abundantly available, but rather replenished according to some rules, such as periodically and in pre-determined increments (as in the gasoline problem), or in specified increments that can be scheduled by the user (as in the alternating stock size problem), or at some arbitrary fixed timepoints. Examples for scheduling problems in this framework are described by Briskorn et al., by Györgyi and Kis, and by Morsy and Pesch [BCL+10, GK14, GK15, MP15]. While the admissibility of a schedule is affected by the availability of a resource (e.g. whether or not there is sufficient inventory), minimizing the inventory is not a main objective in these papers.
For example, suppose we are given a set of jobs to be scheduled on a single machine. Each job consumes some resource, and is only allowed to be scheduled at a timepoint if there is sufficient resource available for that job at this timepoint. Jobs may have different resource requirements. Periodically, at timepoints and in increments known in advance, the resource will be replenished. The goal is to minimize the completion time. If at some timepoint, there is insufficient inventory for any job to be scheduled, then no job can be run, leading to gaps in the schedule and ultimately a later completion time. This problem of minimizing the completion time is polynomial time solvable (sort the jobs according to resource requirement), but an optimal schedule may contain idle times.
Suppose that we have some investment amount α that we can add to the inventory in advance to ensure that there is always sufficient inventory to schedule some job, resulting in a schedule with no empty timeslots, i.e. the optimal completion time. There is a natural connection between this scenario and the gasoline problem: Let |α| in Equation (3) denote the available investment. For this investment, suppose we wish to minimize β, which is the maximum inventory, in order to complete the jobs in the optimal completion time. For any feasible α and β, our algorithm in Section 3 produces a schedule with the optimal completion time using inventory size at most β + µ.
There are other works that directly address the problem of minimizing the maximum or cumulative inventory. Monma considers a problem in which each job has a specified effect on the inventory level [Mon80]. Neumann and Schwindt consider a scheduling problem in which the inventory is subject to both upper and lower bounds [NS03]. However, to the best of our knowledge, our work is the first to give approximation algorithms for the problem of minimizing the maximum inventory for non-renewable resource scheduling with fixed replenishments.
The stock size problem is also closely related to the Steinitz problem, a well-known problem in discrepancy theory [Ban87]. Given a set of vectors $v_1, v_2, \ldots, v_n \in \mathbb{R}^d$ with $\|v_i\| \le 1$ for some fixed norm and $\sum_{i=1}^{n} v_i = 0$, the Steinitz problem is to find a permutation of the vectors so that the norm of the sum of each prefix is bounded. There exists a permutation in which the norm of each prefix is at most d [GS80, Bár08]. It has been conjectured that this bound can be improved to $O(\sqrt{d})$, but only $O(\sqrt{d} \log^{2.5} n)$ is known [HS14]. The stock size problem is the one-dimensional analogue of the Steinitz problem, and the variants of the stock size problem that we introduce in this paper can likewise be extended to higher dimensions.

Algorithms for the Alternating Stock Size Problem
The existence of a feasible solution for the alternating stock size problem follows from the solution for the gasoline puzzle. (See Figure 1 for more details.) Furthermore, the upper bound of 2µ is also tight for the alternating problem. If we modify the example given in [KKRW98], we have an example for the alternating problem with an optimal stock size of 2p − 3, while µ = p.
In this section, we will present algorithms for the alternating stock size problem. We will use the notion of a (q, T )-pair, which is a special case of a (q, T )-batch introduced and used by [KKRW98] for the stock size problem.

Definition 2 ([KKRW98]). A pair of jobs {x, y}, for x ∈ X and y ∈ Y, is called a (q, T)-pair for positive reals T and q ≤ 1 if x ≤ T, y ≤ T and |x − y| ≤ qT.

The following lemma is a special case of Lemma 3 in [KKRW98], and the proofs are identical. We provide the proof in Appendix B for the sake of completeness.
Lemma 1. For positive T , q ≤ 1 and a set of jobs partitioned into (q, T )-pairs, we can find an alternating sequence of the jobs with maximum stock size less than (1 + q)T .

The Pairing Algorithm
We now consider the simple algorithm that pairs x- and y-jobs, and then applies Lemma 1 to sequence the pairs. Suppose that there is some specific pairing that matches each x_i to some y_j, and consider the difference x_i − y_j for each pair. Let α_1 ≥ · · · ≥ α_{n_1} denote the positive differences, and let β_1 ≥ · · · ≥ β_{n_2} denote the absolute values of the negative differences, where n_1 + n_2 = n.
Lemma 2. The matching M ⋆ that matches x i and y i for all i ∈ {1, . . . , n} minimizes both α 1 and β 1 .
Proof. Let M be an arbitrary matching that is different from M⋆. Then there exist edges (i_1, j_1) ∈ M and (i_2, j_2) ∈ M with i_1 > i_2 and j_1 < j_2. We show that we can replace these edges by the edges (i_1, j_2) and (i_2, j_1) without increasing α_1 or β_1. From this the lemma follows, because after a finite number of such exchanges we obtain the matching M⋆. Since $x_{i_1} \le x_{i_2}$ and $y_{j_1} \ge y_{j_2}$, we have $x_{i_1} - y_{j_2} \le x_{i_2} - y_{j_2}$ and $x_{i_2} - y_{j_1} \le x_{i_2} - y_{j_2}$. The same inequalities also imply $y_{j_2} - x_{i_1} \le y_{j_1} - x_{i_1}$ and $y_{j_1} - x_{i_2} \le y_{j_1} - x_{i_1}$. Hence, neither α_1 nor β_1 can increase due to the exchange.

The pairing given by M⋆ directly results in a 2-approximation for the alternating stock size problem, by applying Lemma 1. Without loss of generality, let us assume that max{α_1, β_1} = α_1, and observe that α_1 ≤ µ. Then M⋆ partitions the input into (α_1/µ, µ)-pairs. Applying Lemma 1, we obtain an algorithm that computes a solution with value at most µ + α_1 ≤ 2µ. We note that if α_1 ≤ (1 − ǫ)µ, then we have a (2 − ǫ)-approximation.
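The pairing algorithm can be sketched in a few lines of Python. This is our own reconstruction: sort both sets, match x_i with y_i, and then order the pairs greedily in the spirit of Lemma 1, always taking the first pair (by nondecreasing x − y) that keeps the stock non-negative.

```python
def pair_and_sequence(X, Y):
    """Sketch of the pairing algorithm: sort both sets in nonincreasing
    order, match x_i with y_i (the matching M*), then greedily order
    the pairs so that every prefix sum stays non-negative. Returns the
    signed sequence x, -y, x, -y, ..."""
    xs = sorted(X, reverse=True)
    ys = sorted(Y, reverse=True)
    pairs = sorted(zip(xs, ys), key=lambda p: p[0] - p[1])
    order, stock = [], 0
    while pairs:
        for idx, (x, y) in enumerate(pairs):
            if stock + x - y >= 0:     # this pair keeps the stock valid
                order.extend([x, -y])
                stock += x - y
                pairs.pop(idx)
                break
    return order
```

A suitable pair always exists, since the differences of the remaining pairs sum to minus the current stock, so at least one difference is at least −stock.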

Lower Bound for the Alternating Stock Size Problem
In order to obtain an approximation ratio better than 2, we need to use a lower bound that is more accurate than µ. We now introduce a lower bound closely related to the one given for the stock size problem in [KKRW98] (Lemma 8). We refer to a real number C, which divides the sets X and Y into sets of small jobs and big jobs, as a barrier. Let C ≤ µ be a barrier such that the following hold, where, without loss of generality, n_a ≥ n_b. (If not, then by swapping the roles of the x's and the y's, we obtain a symmetric sequencing problem in which this condition holds.) The elements of (5) are all the x-jobs (partitioned into the sets A and V) and the elements of (6) are all the y-jobs. The jobs in Y that have value at most C are partitioned into W′ and W.
(The sets defined above each depend on C, but in order to avoid cumbersome notation, we do not use a superscript C.) Let s ∈ {1, . . . , n_a − n_b}. After fixing a barrier C, let h be the (unique) index such that w_h > v_h and w_{h+1} ≤ v_{h+1}, and recall that S* is the value of an optimal ordering. Then we obtain the following lower bound on S*.
Lemma 3. For n a > n b , 1 ≤ s ≤ n a − n b , the following inequality holds: Proof. Following the proof of Lemma 8 in [KKRW98], consider the job sequence L * that is the optimal ordering restricted only to the jobs with value at least C. Then there are n a − n b jobs in A whose direct successor in L * is another job in A. (If the very last job in L * is in A, then we say that the first job in L * is its direct successor.) Consider such a job a i and its direct successor in L * , a j . Such a pair must be separated in S * by either a single job from W ′ ∪ W or by an alternating sequence of jobs from W ′ ∪ W and V . We refer to the spaces in S * between such pairs as slots.
Note that the values a_i + a_j plus the total value of the jobs in the corresponding slots is a lower bound on S*. Consider the pairs of successive a's in L* whose corresponding slots do not contain jobs from the set {w′_1, . . . , w′_{s−1}}. There are at least (n_a − n_b − s + 1) of these pairs. Being pessimistic (we want to obtain a high lower bound, and this assumption may make it lower), we assume that these pairs involve the (n_a − n_b − s + 1) smallest values in A′. Moreover, in order to decrease the lower bound even more, it may be the case that all of the pairs in the set {(v_i, w_i)} for 1 ≤ i ≤ h are placed in some slots. Thus, we have the following inequality, which directly leads to our lower bound.

Alternating Batches: Definition
We need a few more tools before we can outline our new algorithm. The notion of batches introduced in [KKRW98], to which we briefly alluded before Lemma 1, is quite useful for the stock size problem. For B ⊆ X ∪ Y , let x(B) and y(B) denote the total value of the x-jobs and y-jobs, respectively, in B. In its original form, the batching lemma (Lemma 3, [KKRW98]) calls for a partition of the input into groups or batches such that for some fixed positive real numbers T and q ≤ 1, each group B has the following properties: x(B), y(B) ≤ T and |x(B) − y(B)| ≤ qT . Given such a partition of the input, a sequence with stock size at most (1 + q)T can be produced. This approach is not directly applicable to the alternating stock size problem, because the output is not necessarily an alternating sequence. However, we will now show that the procedure can be modified to yield a valid ordering. With this goal in mind, we define a new type of batch, which we call an alternating batch. An alternating batch will either contain two jobs (small) or more than two jobs (large).
The modified procedure to construct an ordering of the jobs first partitions the input into alternating batches, then orders these batches, and finally orders the jobs contained within each batch. In the case of a small alternating batch, the batch will contain both an x-job and a y-job, and the last step simply preserves this order. A large alternating batch will be required to fulfill certain additional properties that allow the elements to be sequenced in a way that is both alternating and feasible, i.e. all prefixes are nonnegative.
and consider the following four properties:

Lemma 4. If a batch B satisfies properties (i), (ii), (iii) and (iv), then we can sequence the elements in B so that the items alternate, each prefix is non-negative and the maximum height (or prefix sum) of the sequence is x′_1.
Proof. Place the items in the order $x'_1, y'_1, x'_2, y'_2, \ldots$. All prefixes of this sequence are nonnegative, because by (ii) and (iii) only the first pair may have a positive sum, and by (i) the sum of all the pairs is non-negative. The same observation shows that all prefix sums have value at most x′_1.
Definition 4. We say that a (1−ǫ)-alternating batch with more than two jobs is a large alternating batch. In other words, a large alternating batch obeys conditions (1) and (2) in Definition 3. A small alternating batch contains only two jobs and obeys condition (1) in Definition 3.
Note that, by definition, in a large alternating batch B, the sum of the x-jobs in B is at least the sum of the y-jobs in B.
Lemma 5. If the sets X and Y can be partitioned into large and small (1 − ǫ)-alternating batches, then we can find an alternating sequence with maximum stock size less than (2 − ǫ)µ.
Proof. We will show that the proof of Lemma 3 in [KKRW98] can be modified to prove our lemma. (This proof is almost identical to that in [KKRW98], but since we need to make subtle changes, we include it in its entirety here for the sake of completeness.) Let us set q = (1 − ǫ) and T = µ. The only difference will be that inside the large alternating batches, we will not always sequence all of the x's before all of the y's, but we instead use the algorithm for sequencing an alternating batch that was given in Lemma 4.
We sort all of the q-alternating batches based on the value of x(B) − y(B) in nondecreasing order into a sequence B. Let us begin with the empty list L S and with the current stock size set to zero. We repeat the following step until B is empty: "Find the first batch B in B such that S + x(B) − y(B) ≥ 0 and set S := S + x(B) − y(B). Append B to L S and remove it from B." Afterward, we sequence each large alternating batch B according to Lemma 4, and each small alternating batch by simply placing the x-job before the y-job.
Since the sum of all x(B) − y(B) is zero, and the stock size never goes below zero, each time a batch with positive x(B)−y(B) is chosen, there exists at least one unsequenced batch with negative x(B)− y(B). To prove the upper bound (1+ q)T on the maximum, [KKRW98] introduce the notion of breakpoints, which fulfill the following two conditions: (a) At each breakpoint, the current stock size S is less than qT ; and (b) between any two consecutive breakpoints, S remains below (1 + q)T . Obviously, if the breakpoints cover the whole time period, this will prove the lemma.
The first breakpoint is at time zero; the other breakpoints are the time points just before a batch B with positive x(B) − y(B) is started. The last breakpoint is defined to be just after the last batch.
The first breakpoint and the last one fulfill condition (a) by definition. If one of the other breakpoints did not fulfill condition (a), then S ≥ qT would hold, and because of property (1) in Definition 3, our algorithm would have chosen a batch B with negative x(B) − y(B) as the next batch. Thus, all of the breakpoints fulfill the condition that S < qT. Now we need to consider the values of S between two consecutive breakpoints. Let us consider two consecutive breakpoints BP_i and BP_{i+1}. Recall that all batches B− with nonpositive x(B−) − y(B−) have only two jobs. Since, at time BP_i, a batch B+ with positive x(B+) − y(B+) is started, it follows that for each unsequenced batch B−, inequality (7) holds; otherwise, S_i + x(B−) − y(B−) ≥ 0 and the algorithm would have chosen B− instead (note that y(B−) is a single job and is therefore at most T). After batch B+ is appended to L_S, the current stock size increases to S_i + x(B+) − y(B+) ≤ S_i + qT. If batch B+ contains only two jobs, then in between the stock size is at most S_i + x(B+) < qT + T. If B+ is a large alternating batch, then by Lemma 4, the highest point after S_i is at most S_i + x′_1 < qT + T. Either the next batch again has positive x(B) − y(B) (in which case we have reached the next breakpoint), or there follows a sequence of (small) batches with nonpositive x(B) − y(B). The stock size within any of these batches B− always remains below (1 + q)T because of inequality (7) and because B+ is a q-alternating batch. After each of these batches, the stock does not increase. This shows that condition (b) holds for any two consecutive breakpoints, and the proof of the lemma is complete.

Alternating Batches: Construction
In this section, we present the final tool required for our algorithm. Suppose that for some ǫ with 0 ≤ ǫ ≤ 1, the following conditions hold for an input instance of the alternating stock size problem: Then, we claim, there is some value of ǫ (to be determined later) for which the above two conditions can be used to partition the input into (1 − ǫ)-alternating batches, to which we can then apply Lemma 5. In this section, we will rely heavily on the notation introduced in Section 2.2. The sets A′ = {a′_1, . . . , a′_{n_a−n_b}} and W′ = {w′_1, . . . , w′_{n_a−n_b}} contain exactly the pairs in M⋆ that are split by the barrier C. Let s be the smallest index such that w′_s < ǫµ. To see that such an s actually exists, we note the following. Let i⋆ denote the index such that x_{i⋆} − y_{i⋆} = α_1. Then y_{i⋆} < ǫµ and the pair (x_{i⋆}, y_{i⋆}) is split by C. Thus, y_{i⋆} corresponds to some w′_{i′}, and therefore s ≤ i′. See Figure 2 for a schematic drawing.
Figure 2: An illustration of the various elements used in the construction of the lower bound.

For j ∈ {1, . . . , h}, let B_j denote the pair {v_j, w_j}. Since w′_s < ǫµ, it follows that all w_i's in W also have value less than ǫµ. Moreover, β′_j < ǫµ for j ∈ {1, . . . , h}. Our goal is now to construct (1 − ǫ)-alternating batches.
The set A i therefore forms a small (1 − ǫ)-alternating batch. For each A i where i ∈ {s, . . . , n a − n b }, we will find a set of B j 's that can be grouped with this A i to create a large (1 − ǫ)-alternating batch. However, to do this, we require that the condition on ǫ found in Claim 1 be satisfied.
For ease of notation, we set d = n_a − n_b − s + 1. In the following lemma, we show that we can also construct a (1 − ǫ)-alternating batch for each A_i for i ∈ [s, n_a − n_b].
Proof. Our goal is to show that a set of B_j's can be assigned to each A_i so that the total value of the corresponding α′_i and β′_j's is at most (1 − ǫ)µ. This will imply that condition (1) in Definition 3 holds. Note that conditions (ii), (iii) and (iv) hold for any set S_i ∪ A_{i+s−1}. We will also show that (i) holds for the batches we construct.
For p ∈ {1, . . . , d}, let $B_p = \{B_1, \ldots, B_h\} \setminus \bigcup_{k=1}^{p-1} S_k$ and let f(B_p) denote the sum of the weights of the elements that are in B_p. As we construct the sets S_i, we will show at each step p that the following hypothesis holds: we have formed p − 1 sets S_1, . . . , S_{p−1} such that each S_k ∪ A_{k+s−1} is a (1 − ǫ)-alternating batch for k ∈ {1, . . . , p − 1}.
When p = 1, the hypothesis is given by Lemma 6. Now, let us assume that we have made p − 1 sets for p − 1 < d. If α′_{p+s−1} ≤ (1 − ǫ)µ, we set S_p = ∅ and the required inequality still holds. Otherwise, let S_p be a subset of B_p such that ǫµ − w′_{s+p−1} ≤ f(S_p) < 2ǫµ − w′_{s+p−1}. Such a subset exists because all the elements of B_p are at most ǫµ. So the inequality holds at step p + 1, and we have constructed the set S_p. This proves the lemma.

Now we want to complete the construction of the (1 − ǫ)-alternating batches, so that we can apply Lemma 5. For the sets A_i, where i ∈ {s, . . . , n_a − n_b}, we construct batches according to Lemma 7. Let y_{i*} = w′_s. For all i < i*, the pair (x_i, y_i) forms a small (1 − ǫ)-alternating batch. This follows from the fact that for all i < i*, y_i ≥ ǫµ, by definition of s. Finally, if there are remaining elements, they are v_i's and w_i's, which can be paired up arbitrarily to construct more small (1 − ǫ)-alternating batches, since each remaining v_i has value strictly less than (1 − ǫ)µ due to our choice of barrier, and each remaining w_i has value at most ǫµ.
Since the only limits on the value of ǫ are imposed by Lemma 6, we can set ǫ = .21 and partition the input into .79-alternating batches.

A 1.79-Approximation Algorithm
We are now ready to present an algorithm for the alternating stock size problem with an approximation guarantee of 1.79.

Proof. In the first case, we have α_1 ≤ (1 − ǫ)µ. The algorithm described in Section 2.1 therefore gives a solution whose value is at most µ + α_1 ≤ (2 − ǫ)µ, and we know that µ is a lower bound. In the second case, we have LB(C) ≥ 2µ/(2 − ǫ), in which case an algorithm with a guarantee of 2µ is a (2 − ǫ)-approximation. The last case is covered in the proof of Lemma 5.

Gasoline Problem
Let the variable $z_{ij}$ be 1 if gas station $y_i$ is placed in position $j$, and 0 otherwise. Then we can formulate the gasoline problem as the following integer linear program, whose solution matrix $Z$ is a permutation matrix.
\[
\min \; \beta - \alpha
\]
\[
\forall j \in \{1, \ldots, n\}: \sum_{i=1}^{n} z_{ij} = 1, \qquad \forall i \in \{1, \ldots, n\}: \sum_{j=1}^{n} z_{ij} = 1,
\]
\[
\forall k \in \{1, \ldots, n\}: \quad \alpha \le \sum_{j=1}^{k} \Big( \sum_{i=1}^{n} z_{ij} y_i - x_j \Big) \quad (9), \qquad \sum_{j=1}^{k} \Big( \sum_{i=1}^{n} z_{ij} y_i - x_j \Big) \le \beta \quad (10),
\]
\[
\forall i, j \in \{1, \ldots, n\}: z_{ij} \in \{0, 1\}.
\]
Observe that (9) and (10) imply that for every interval $I = [k, \ell]$ the sum of the $x_i$'s in $I$ and the sum of the $y_i$'s assigned to $I$ by $Z$ differ by at most $\beta - \alpha$. If we replace $z_{ij} \in \{0,1\}$ with the constraint $z_{ij} \in [0,1]$, then the solution to the linear program, $Z$, is an $n \times n$ doubly stochastic matrix. Now we have the following rounding problem. We are given an $n \times n$ doubly stochastic matrix $Z = \{z_{ij}\}$ and we define $z_j$ to be the total fractional value of the $y_i$'s that are in position $j$, i.e., $z_j = \sum_{i=1}^{n} z_{ij} \cdot y_i$. Our goal is to find a permutation of the $y_i$'s such that the $y_i$ assigned to position $j$ is roughly equal to $z_j$.
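Since constraints (9) and (10) only involve prefix sums, the tightest $\beta - \alpha$ achievable for a fixed (possibly fractional) $Z$ is simply the spread of those prefix values. A minimal sketch of this evaluation, under our reading of the prefix constraints above (the names `lp_value`, `Zfrac`, and `Zid` are ours); it uses the instance $X = \{5,5,5,5\}$, $Y = \{9,6,4,1\}$ that reappears later in this section:

```python
from fractions import Fraction as F

def lp_value(Z, x, y):
    """Tightest beta - alpha for which (Z, alpha, beta) satisfies the
    prefix constraints (9)-(10), where z_j = sum_i Z[i][j] * y[i]."""
    n = len(x)
    zvals = [sum(Z[i][j] * y[i] for i in range(n)) for j in range(n)]
    prefix, s = [], F(0)
    for j in range(n):
        s += zvals[j] - x[j]   # running difference after position j
        prefix.append(s)
    return max(prefix) - min(prefix)

x = [F(5)] * 4
y = [F(9), F(6), F(4), F(1)]
h = F(1, 2)
# fractional Z: positions 1 and 3 take half of y_1 and y_4 each,
# positions 2 and 4 take half of y_2 and y_3 each
Zfrac = [[h, 0, h, 0],
         [0, h, 0, h],
         [0, h, 0, h],
         [h, 0, h, 0]]
# integral Z: the identity permutation (y placed in sorted order)
Zid = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
```

Every fractional position of `Zfrac` has value 5, so all prefixes vanish and the value is 0, while the identity permutation has value 5.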
A natural approach would be to decompose $Z$ into a convex combination of permutation matrices and check whether one of these gives a good permutation of the elements in $Y$. However, this fails in general: the permutations in the decomposition could each have an interval with very large value, while the optimal permutation of the elements in $Y$ is $\{1, 1, \ldots, 1, B, 1, \ldots, 1, B, 1, \ldots, 1\}$.

Transformation
Given a doubly stochastic matrix $Z = \{z_{ij}\}$, we transform it into a doubly stochastic matrix $T = \{t_{ij}\}$ with special properties. First of all, for each $j$, $z_j = \sum_{i=1}^{n} t_{ij} \cdot y_i$. This means that if $(Z, \alpha, \beta)$ is a feasible solution to the linear program, then $(T, \alpha, \beta)$ is also a feasible solution. In particular, if $Z$ is an optimal solution, for which $\beta - \alpha$ is as small as possible, then $T$ is also optimal.
We call a row $i$ in a doubly stochastic matrix $A = \{a_{ij}\}$ finished at column $\ell$ if $\sum_{j=1}^{\ell} a_{ij} = 1$. We say that a matrix $T$ has the consecutiveness property if the following holds: for each column $j$ and any rows $i_1$ and $i_3$ with $i_1 < i_3$, $t_{i_1 j} > 0$, and $t_{i_3 j} > 0$, each row $i_2 \in \{i_1 + 1, \ldots, i_3 - 1\}$ is finished at column $j$.
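The definition translates directly into a small predicate. A sketch (the helper name `has_consecutiveness` is ours), with one matrix satisfying the property and one violating it in its first column:

```python
from fractions import Fraction as F

def has_consecutiveness(T):
    # For every column j and positive rows i1 < i3, each row strictly
    # between them must be finished at column j (row prefix sums to 1).
    n = len(T)
    for j in range(n):
        pos = [i for i in range(n) if T[i][j] > 0]
        if len(pos) < 2:
            continue
        for i2 in range(pos[0] + 1, pos[-1]):
            if sum(T[i2][:j + 1]) != 1:
                return False
    return True

h = F(1, 2)
ok = [[h, h, 0],
      [h, h, 0],
      [0, 0, 1]]
# column 1 has positive entries in rows 1 and 4, but the rows in
# between are not finished there
bad = [[h, 0, h, 0],
       [0, h, 0, h],
       [0, h, 0, h],
       [h, 0, h, 0]]
```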
Our procedure to transform the matrix $Z$ into a matrix $T$ with the desired property relies on the following transformation rule. Assume that there exist indices $j$, $i_1$, $i_3$, and $i_2 \in \{i_1 + 1, \ldots, i_3 - 1\}$ such that $z_{i_1 j} > 0$, $z_{i_3 j} > 0$, and row $i_2$ is not finished in matrix $Z$ at column $j$. Then the procedure shift, shown as Algorithm 2, computes a column vector $a = (a_1, \ldots, a_n)$ which satisfies the following lemma.
Lemma 8. For any $\delta \ge 0$, the vector $a$ returned by shift$(Z, j, i_1, i_2, i_3, \delta)$ satisfies $\sum_{i=1}^{n} a_i \cdot y_i = z_j$.
Proof. Since $a_i = z_{ij}$ for all $i \in \{1, \ldots, n\} \setminus \{i_1, i_2, i_3\}$, it suffices to prove that $a_{i_1} y_{i_1} + a_{i_2} y_{i_2} + a_{i_3} y_{i_3} = z_{i_1 j} y_{i_1} + z_{i_2 j} y_{i_2} + z_{i_3 j} y_{i_3}$.
Algorithm 2 shift$(Z, j, i_1, i_2, i_3, \delta)$
1: $\forall i \in \{1, \ldots, n\} \setminus \{i_1, i_2, i_3\}$: $a_i = z_{ij}$;
2: $a_{i_2} = z_{i_2 j} + \delta$;
3: if $y_{i_1} = y_{i_3}$ then
4: $a_{i_1} = z_{i_1 j} - \delta$; $a_{i_3} = z_{i_3 j}$;
5: else
6: $a_{i_1} = z_{i_1 j} - \delta \cdot \frac{y_{i_2} - y_{i_3}}{y_{i_1} - y_{i_3}}$; $a_{i_3} = z_{i_3 j} - \delta \cdot \frac{y_{i_1} - y_{i_2}}{y_{i_1} - y_{i_3}}$;

In the first case $y_{i_1} = y_{i_3}$, the claim follows easily because then $y_{i_1} = y_{i_2} = y_{i_3}$ (remember that $i_1 < i_2 < i_3$, which implies $y_{i_1} \ge y_{i_2} \ge y_{i_3}$). In the second case $y_{i_1} > y_{i_3}$, we have
\[
a_{i_1} y_{i_1} + a_{i_2} y_{i_2} + a_{i_3} y_{i_3} = z_{i_1 j} y_{i_1} + z_{i_2 j} y_{i_2} + z_{i_3 j} y_{i_3} + \delta y_{i_2} - \delta \cdot \frac{(y_{i_2} - y_{i_3}) y_{i_1} + (y_{i_1} - y_{i_2}) y_{i_3}}{y_{i_1} - y_{i_3}} = z_{i_1 j} y_{i_1} + z_{i_2 j} y_{i_2} + z_{i_3 j} y_{i_3},
\]
since $(y_{i_2} - y_{i_3}) y_{i_1} + (y_{i_1} - y_{i_2}) y_{i_3} = y_{i_2} (y_{i_1} - y_{i_3})$.

Let $Z'$ denote the matrix that we obtain from $Z$ if we replace the $j$th column by the vector $a$ returned by the procedure shift. The previous lemma shows that $Z'$ satisfies (9) and (10) for the same $\beta$ and $\alpha$ as $Z$, because the value $z_j$ is not changed by the procedure. However, the matrix $Z'$ is not doubly stochastic, because the rows $i_1$, $i_2$, and $i_3$ do not add up to one anymore. In order to repair this, we have to apply the shift operation again to another column with $-\delta$. Formally, let us redefine the matrix $Z' = \{z'_{ij}\}$ as the outcome of the operation transform shown as Algorithm 3.
Algorithm 3 transform$(Z, j, i_1, i_2, i_3)$
1: The $j$th column of $Z'$ equals shift$(Z, j, i_1, i_2, i_3, \delta)$ for $\delta > 0$ to be chosen later.
2: Let $j' > j$ denote the smallest index larger than $j$ with $z_{i_2 j'} > 0$. Such an index must exist because row $i_2$ is not finished in $Z$ at column $j$. The $j'$th column of $Z'$ equals shift$(Z, j', i_1, i_2, i_3, -\delta)$.
3: All columns of $Z$ and $Z'$, except for columns $j$ and $j'$, coincide.
4: The value $\delta$ is chosen as the largest value for which all entries of $Z'$ are in $[0,1]$. This value must be strictly larger than 0 due to our choice of $j$, $j'$, $i_1$, $i_2$, and $i_3$.
5: return $Z'$

Observe that $Z'$ is a doubly stochastic matrix because the rows $i_1$, $i_2$, and $i_3$ sum up to one and all entries are from $[0,1]$. Applying Lemma 8 twice implies that $(Z', \alpha, \beta)$ is a feasible solution to the linear program if $(Z, \alpha, \beta)$ is one.
We will transform $Z$ by a finite number of applications of the operation transform. As long as the current matrix $T$ (which is initially chosen as $Z$) does not have the consecutiveness property, let $j$ be the smallest index for which there exist indices $i_1$, $i_3$, and $i_2 \in \{i_1 + 1, \ldots, i_3 - 1\}$ such that $t_{i_1 j} > 0$, $t_{i_3 j} > 0$, and row $i_2$ is not finished in $T$ at column $j$. Furthermore, let $i_1$ and $i_3$ be the smallest and largest indices with $t_{i_1 j} > 0$ and $t_{i_3 j} > 0$, respectively, and let $i_2$ be the smallest index from $\{i_1 + 1, \ldots, i_3 - 1\}$ for which row $i_2$ is not finished at column $j$. We apply the operation transform$(T, j, i_1, i_2, i_3)$ to obtain a new matrix $T$.
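The whole transformation can be sketched as follows. The compensation weights on rows $i_1$ and $i_3$ are our reconstruction of the elided lines of Algorithm 2 (they are the weights forced by Lemma 8 together with a preserved column sum); all function names are ours, and exact rational arithmetic sidesteps floating-point termination issues:

```python
from fractions import Fraction as F

def coeffs(y, i1, i2, i3):
    # Weights that keep the weighted column value unchanged when delta
    # moves onto row i2 (if y[i1] == y[i3], all three y-values coincide).
    if y[i1] == y[i3]:
        return F(1), F(0)
    c1 = F(y[i2] - y[i3]) / F(y[i1] - y[i3])
    return c1, F(1) - c1

def shift(T, y, j, i1, i2, i3, delta):
    # Algorithm 2: add delta to t[i2][j], compensate on rows i1 and i3 so
    # that the column sum and the value z_j are preserved (Lemma 8).
    c1, c3 = coeffs(y, i1, i2, i3)
    T[i2][j] += delta
    T[i1][j] -= delta * c1
    T[i3][j] -= delta * c3

def transform(T, y, j, i1, i2, i3):
    # Algorithm 3: shift by +delta in column j and by -delta in the next
    # column j' where row i2 has mass; delta is the largest value that
    # keeps all entries of both columns in [0, 1].
    n = len(T)
    jp = next(jj for jj in range(j + 1, n) if T[i2][jj] > 0)
    c1, c3 = coeffs(y, i1, i2, i3)
    cands = [1 - T[i2][j], T[i2][jp]]
    for i, c in ((i1, c1), (i3, c3)):
        if c > 0:
            cands += [T[i][j] / c, (1 - T[i][jp]) / c]
    delta = min(cands)
    shift(T, y, j, i1, i2, i3, delta)
    shift(T, y, jp, i1, i2, i3, -delta)

def make_consecutive(Z, y):
    # Repeatedly pick the smallest violating column j, the extreme
    # positive rows i1 < i3 in it, and the smallest unfinished row i2
    # strictly between them, exactly as described above.
    T = [row[:] for row in Z]
    n = len(T)
    while True:
        viol = None
        for j in range(n):
            pos = [i for i in range(n) if T[i][j] > 0]
            i1, i3 = pos[0], pos[-1]
            mids = [i for i in range(i1 + 1, i3) if sum(T[i][:j + 1]) < 1]
            if mids:
                viol = (j, i1, mids[0], i3)
                break
        if viol is None:
            return T
        transform(T, y, *viol)

y = [9, 6, 4, 1]
h = F(1, 2)
Z = [[h, 0, h, 0], [0, h, 0, h], [0, h, 0, h], [h, 0, h, 0]]
T = make_consecutive(Z, y)
```

On this input (the extreme point discussed later in this section) the loop terminates after five transform operations with a matrix whose first two columns are supported on rows 2 and 3 and whose last two columns are supported on rows 1 and 4, while every weighted column value stays 5.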
Lemma 9. After at most a polynomial number of transform operations, no further such operation can be applied. Then T is a doubly stochastic matrix with the consecutiveness property.
Proof. If the transform operation is not applicable anymore, then by definition the current matrix T must satisfy the consecutiveness property. Hence, we only need to show that this is the case after at most a polynomial number of transform operations.
First of all, observe that the smallest index $j$ for which column $j$ does not satisfy the consecutiveness property cannot decrease, because transform does not change the columns $1, \ldots, j-1$. Hence, we only need to argue that $j$ increases after a polynomial number of transform operations. For this, observe that the smallest index $i_1$ with $t_{i_1 j} > 0$ cannot decrease and the largest index $i_3$ with $t_{i_3 j} > 0$ cannot increase, because the transform operation only increases $t_{i_2 j}$ for some $i_2$ with $i_1 < i_2 < i_3$. Hence, again it is sufficient to prove that either $i_1$ increases or $i_3$ decreases after a polynomial number of steps. This follows from the fact that as long as $j$, $i_1$, and $i_3$ do not change, $i_2$ cannot decrease. Furthermore, as long as $j$, $i_1$, $i_2$, and $i_3$ do not change, the index $j'$ increases with every transform operation. Hence, after at most $n$ steps $i_2$ has to increase, which implies that after at most $n^2$ steps $i_1$ has to increase or $i_3$ has to decrease.
In the remainder, we will not need the matrix $Z$ anymore, but only the matrix $T$. For convenience, we will use the notation $t_j = \sum_{i=1}^{n} t_{ij} \cdot y_i$ instead of $z_j$, even though the transformation ensures that $t_j$ and $z_j$ coincide.
We now define a graph whose connected components, or blocks, will correspond to the row indices from columns that overlap. More formally, let $V = \{1, \ldots, n\}$ denote a set of vertices and let $G_0$ be the empty graph on $V$. Each column $j$ of $T$ defines a set $E_j$ of edges as follows: the set $E_j$ is a clique on the vertices $i \in V$ with $t_{ij} > 0$, i.e., $E_j$ contains an edge between two vertices $i$ and $i'$ if and only if $t_{ij} > 0$ and $t_{i'j} > 0$. We denote by $G_j$ the graph on $V$ with edge set $E_1 \cup \ldots \cup E_j$. If $B \subseteq \{1, \ldots, n\}$ is a block in $G_j$ with $i \in B$, then we will say that block $B$ contains row $i$. For the following lemma it is convenient to define a matrix $C = \{c_{ij}\}$, which is the cumulative version of $T$. To be more precise, the $j$th column of $C$ equals the sum of the first $j$ columns of $T$. The value of a block $B$ of $G_j$ is $\sum_{i \in B} c_{ij}$; we call $B$ finished if all of its rows are finished at column $j$, and unfinished otherwise.

Lemma 10. For every $j \in \{1, \ldots, n\}$, the following statements hold: (1) every finished block $B$ of $G_j$ has value $|B|$ and every unfinished block $B$ of $G_j$ has value $|B| - 1$; (2) every block of $G_j$ is either a block of $G_{j-1}$, an unfinished block of $G_{j-1}$ that became finished, or the union of two unfinished blocks of $G_{j-1}$; (3) with every block of $G_j$ one can associate an interval of row indices containing it such that the intervals associated with distinct unfinished blocks are pairwise disjoint.

Proof. We prove the lemma by induction on $j$. Let us first consider the base case $j = 1$. The consecutiveness property of $T$ guarantees that the first column of $C$ (which equals the first column of $T$) contains at most two strictly positive entries. Let $B$ denote the block that corresponds to these entries. The value of this block is one because the sum of all entries of the first column equals one. If $|B| = 1$ then $B$ is finished, because if $T$ contains only one positive entry in the first column, then this entry must be one. If $|B| = 2$ then $B$ is unfinished, because neither of its rows is finished. In both cases the first statement of the lemma is true for block $B$. All rows that have a zero in the first column form an unfinished block of their own with value zero. Also for these blocks the first statement is correct. The second statement is also correct because if $|B| = 1$ then the only difference between the blocks of $G_0$ and $G_1$ is that block $B$ becomes finished, and if $|B| = 2$ then two unfinished blocks of $G_0$ are merged. The correctness of the third statement follows from the fact that in the case $|B| = 2$ the two entries of $B$ are consecutive due to the consecutiveness property of $T$.
Now we come to the inductive step and assume that the statement is correct for the blocks of G j−1 . Let I ⊆ [1, n] denote the set of indices i for which t ij > 0. Observe that I can only be non-disjoint from unfinished blocks of G j−1 . Due to the definition of G j only blocks that are non-disjoint from I change from G j−1 to G j . Hence, the correctness of the first statement for all blocks of G j that are disjoint from I follows from the induction hypothesis. If I is non-disjoint only from a single block B of G j−1 then this block will become finished. This follows from the fact that B has value |B| − 1 in G j−1 and that a total value of one is added to B because T is a doubly stochastic matrix. Hence, in this case G j−1 and G j define the same set of blocks and the only difference is that B is unfinished in G j−1 and finished in G j . Then the correctness of all three statements follows from the induction hypothesis.
It remains to consider the case that $I$ is non-disjoint from at least two blocks of $G_{j-1}$. First we observe that $I$ can be non-disjoint from at most two blocks of $G_{j-1}$. Assume for contradiction that $I$ is non-disjoint from three different blocks $B_1$, $B_2$, and $B_3$. Due to the third property, the induction hypothesis implies that one of these blocks must lie entirely between the two others. Let $B_2$ be this block. Since $I$ is non-disjoint from $B_1$ and $B_3$, there are two indices $i_1$ and $i_3$ with $t_{i_1 j} > 0$ and $t_{i_3 j} > 0$ and $i_1 < i_2 < i_3$ for all $i_2 \in B_2$. Due to the consecutiveness property of $T$, this is only possible if all rows that belong to $B_2$ are finished at column $j$. Due to the induction hypothesis, the value of $B_2$ at column $j-1$ is $|B_2| - 1$. Hence, in order to finish all rows that belong to $B_2$, one has to add a value of exactly one to $B_2$ in column $j$. Since column $j$ of $T$ sums to one, this implies that there cannot be an index $i \notin B_2$ with $t_{ij} > 0$, contradicting the choice of $i_1$ and $i_3$. This implies the correctness of the second property.
Hence, we only need to consider the case that I is non-disjoint from exactly two blocks B 1 and B 2 of G j−1 . Due to the induction hypothesis the values of these blocks at column j − 1 are |B 1 | − 1 and |B 2 |− 1, respectively. Since column j of T has a sum of one, the value of the block B in G j that emerges from merging B 1 and B 2 has a value of (|B 1 |− 1)+ (|B 2 |− 1)+ 1 = |B 1 |+ |B 2 |− 1 = |B|− 1.
This proves the first property. To prove the third property, we use the fact that the consecutiveness property of $T$ guarantees that there cannot be an unfinished block between $B_1$ and $B_2$ in $G_{j-1}$. Hence, we can associate with $B$ the smallest interval that contains the intervals $I_1$ and $I_2$ that were associated with $B_1$ and $B_2$ in $G_{j-1}$, which proves the third property.
One might ask whether the consecutiveness property is satisfied by every optimal extreme point of the linear program. Let us mention that this is not the case. A simple counterexample is provided by the instance $X = \{5, 5, 5, 5\}$ and $Y = \{9, 6, 4, 1\}$. In this instance, an optimal extreme point is, for example, to take one half of each of the items $y_1$ and $y_4$ in steps one and three, and one half of each of the items $y_2$ and $y_3$ in steps two and four. This extreme point, however, does not satisfy the consecutiveness property. Hence, the transformation described in this section is necessary.
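This extreme point is optimal (every position receives value 5, so all prefix sums vanish and $\beta - \alpha = 0$), yet no permutation of $Y$ achieves a value below 4, which can be verified by enumeration. A small sketch (the helper `value` is ours, not from the paper):

```python
from fractions import Fraction as F
from itertools import permutations

def value(ys, xs):
    # beta - alpha for a fixed order: the spread of the prefix sums
    # of the differences y - x.
    s, prefixes = F(0), []
    for yv, xv in zip(ys, xs):
        s += yv - xv
        prefixes.append(s)
    return max(prefixes) - min(prefixes)

xs = [F(5)] * 4
Y = [F(9), F(6), F(4), F(1)]
# the extreme point above places (y1+y4)/2 = 5 in steps one and three
# and (y2+y3)/2 = 5 in steps two and four
frac = [(Y[0] + Y[3]) / 2, (Y[1] + Y[2]) / 2,
        (Y[0] + Y[3]) / 2, (Y[1] + Y[2]) / 2]
best = min(value(p, xs) for p in permutations(Y))
```

Here `value(frac, xs)` is 0, while `best` is 4 (achieved, e.g., by the order 6, 4, 9, 1).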

Rounding
In this section, we use the transformed matrix T to create the solution matrix R, which is a doubly stochastic 0/1 matrix, i.e., a permutation matrix. We apply the following rounding method.
1: for $j = 1$ to $n$ do
2:   Let $B$ denote the active block in $G_j$, i.e., the block that contains the rows $i$ with $t_{ij} > 0$.
3:   Let $p$ denote the smallest index in $B$ such that $r_{pi} = 0$ for all $i < j$.
4:   Set $r_{pj} = 1$ and $r_{qj} = 0$ for all $q \ne p$.
5: end for
Observe that the first step is well-defined because all non-zero entries in column j belong by definition to the same block of G j . The resulting matrix R will be doubly stochastic, since each column contains a single one, as does each row. We just need to prove that in Line 3 there always exists a row p ∈ B that is unfinished in R at column j − 1. This follows from the first part of the next lemma because, due to Lemma 10, the active block B in G j emerges from one or two unfinished blocks in G j−1 and these blocks each contain a row that is unfinished in R at column j − 1.
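The rounding procedure can be sketched with a union-find structure over rows to maintain the blocks of $G_j$ (function and variable names are ours); the matrix below is the transformed counterexample matrix from the previous section, with $y = (9, 6, 4, 1)$:

```python
from fractions import Fraction as F

def round_to_permutation(T, y):
    # Scan columns left to right; in column j the positive rows all lie
    # in one block of G_j (tracked with union-find); assign position j
    # to the smallest row of that block not yet used (Line 3 above).
    n = len(T)
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    used = [False] * n
    perm = [None] * n          # perm[j] = row placed in position j
    for j in range(n):
        pos = [i for i in range(n) if T[i][j] > 0]
        for i in pos[1:]:
            parent[find(i)] = find(pos[0])
        root = find(pos[0])
        block = [i for i in range(n) if find(i) == root]
        p = min(i for i in block if not used[i])
        used[p], perm[j] = True, p
    return perm

y = [9, 6, 4, 1]
h = F(1, 2)
T = [[0, 0, h, h], [h, h, 0, 0], [h, h, 0, 0], [0, 0, h, h]]
perm = round_to_permutation(T, y)

tvals = [sum(T[i][j] * y[i] for i in range(4)) for j in range(4)]
gaps, s = [], F(0)
for j in range(4):
    s += y[perm[j]] - tvals[j]
    gaps.append(s)   # Lemma 12 bounds each gap by [0, mu_y]
```

On this input the procedure places the rows in the order 2, 3, 1, 4, and every prefix gap between the rounded and fractional values stays within $[0, \mu_y]$.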
Lemma 11. Let B be a block in G j for some j ∈ {1, . . . , n}.
1. If B is an unfinished block in G j and p is the largest index in B, then r pi = 0 for all i ≤ j and all rows corresponding to B \ {p} are finished in R at column j.
2. If B is a finished block in G j , then for all q ∈ B, row q is finished in R at column j.
Proof. We prove the lemma by induction on $j$. Let us first consider the base case $j = 1$. The consecutiveness property of $T$ guarantees that the first column of $T$ contains at most two strictly positive entries. Let $B$ denote the block that corresponds to these entries. If $|B| = 1$ then $B = \{p\}$ is finished in $T$ at column 1, and the rounding will set $r_{p1} = 1$. If $|B| = 2$ then $B = \{p, q\}$ is unfinished, and the rounding will set $r_{p1} = 1$ if $p < q$. In both cases the statement of the lemma is correct for $B$. All other blocks in $G_1$ are unfinished singleton blocks, for which the lemma is also true.

Now let us assume that the lemma is true for $j-1$ and prove it for $j$. By property 2 of Lemma 10, the blocks in $G_j$ emerge from the blocks in $G_{j-1}$ either by merging exactly two unfinished blocks or by making one unfinished block finished. In the former case, suppose we merge two blocks $B_1$ and $B_2$. Let $\ell_1$ and $\ell_2$ denote the largest indices in $B_1$ and $B_2$, respectively, and assume that $\ell_1 < \ell_2$. By the induction hypothesis, we have $r_{\ell_1 i} = r_{\ell_2 i} = 0$ for all $i \le j-1$. Thus, we can set $r_{\ell_1 j} = 1$, and the first statement will still hold for the new unfinished block in $G_j$. In the latter case, suppose that $B$ is an unfinished block in $G_{j-1}$ that becomes finished in $G_j$, and that $\ell$ is the largest index in $B$. Then by the induction hypothesis, $r_{\ell i} = 0$ for all $i \le j-1$, so we can set $r_{\ell j} = 1$ and the second statement holds.
We define the value of a permutation matrix M to be the smallest γ for which there exist α ′ and β ′ with γ = β ′ − α ′ such that (M, α ′ , β ′ ) is a feasible solution to the linear program.
Theorem 2. Let (T, α, β) be an optimal solution to the linear program. Then (R, α, β + µ y ) is a feasible solution to the linear program. Hence, the value of the matrix R is at most (β − α) + µ y ≤ 2 · OPT, where OPT denotes the value of the optimal permutation matrix.
For ease of notation, we define $r_j = \sum_{i=1}^{n} r_{ij} \cdot y_i$. Note that $r_j$ corresponds to the value of the element from $Y$ that the algorithm places in position $j$. We will see later that Theorem 2 follows easily from the next lemma.

Lemma 12. For each $k \in \{1, \ldots, n\}$,
\[
0 \le \sum_{j=1}^{k} (r_j - t_j) \le \mu_y.
\]

We need the following lemma in the proof of Lemma 12.
Lemma 13. Let $b$ be the largest index in an unfinished block $B$ in $G_j$. Then
\[
\sum_{i \in B \setminus \{b\}} (1 - c_{ij}) = c_{bj}.
\]

Proof. Let the value of the unfinished block $B$ be $k = \sum_{i \in B} c_{ij}$. By property 1 of Lemma 10, block $B$ consists of $k + 1$ rows. Thus, we have
\[
\sum_{i \in B \setminus \{b\}} (1 - c_{ij}) = |B| - 1 - (k - c_{bj}) = (k + 1) - 1 - k + c_{bj} = c_{bj}.
\]

Proof of Lemma 12. Let us consider the sets of finished and unfinished blocks in $G_k$, denoted $\mathcal{B}_F$ and $\mathcal{B}_U$, respectively. For a block $B \in \mathcal{B}_F \cup \mathcal{B}_U$, we denote by
\[
\mathrm{er}_k(B) = \sum_{i \in B} \Big( \sum_{j=1}^{k} r_{ij} - c_{ik} \Big) y_i
\]
its rounding error. Since each row is contained in exactly one block of $G_k$,
\[
\sum_{j=1}^{k} (r_j - t_j) = \sum_{B \in \mathcal{B}_F \cup \mathcal{B}_U} \mathrm{er}_k(B). \qquad (12)
\]
Hence, in order to prove the lemma, it suffices to bound the rounding errors of the blocks. If block $B$ is finished in $G_k$, then all rows that belong to $B$ are finished in $T$ and in $R$ (due to property 2 of Lemma 11) at column $k$. Hence,
\[
\mathrm{er}_k(B) = 0. \qquad (13)
\]
Now consider an unfinished block $B$ in $G_k$, and let $a$ and $b$ denote the smallest and largest index in $B$, respectively. By Lemma 11, all rows in the block except for $b$ are finished in $R$ at column $k$ (i.e., $\sum_{j=1}^{k} r_{ij} = 1$ for $i \in B \setminus \{b\}$ and $\sum_{j=1}^{k} r_{bj} = 0$). The rounding error of $B$ can thus be bounded as follows (remember that $c_{ik} = \sum_{j=1}^{k} t_{ij}$):
\[
\mathrm{er}_k(B) = \sum_{i \in B \setminus \{b\}} (1 - c_{ik}) y_i - c_{bk} y_b \stackrel{(14)}{=} \sum_{i \in B \setminus \{b\}} (1 - c_{ik}) (y_i - y_b) \stackrel{(15)}{\le} \sum_{i \in B \setminus \{b\}} (1 - c_{ik}) (y_a - y_b) \stackrel{(16)}{=} c_{bk} (y_a - y_b) \stackrel{(17)}{\le} y_a - y_b.
\]
Equations (14) and (16) follow from Lemma 13. Inequality (17) follows from the fact that $c_{bk} \le 1$. Inequality (15) follows from the facts that $1 - c_{ik} \ge 0$ and $y_a \ge y_i$ for all $i \in B$. Since also $y_i - y_b \ge 0$ for all $i \in B$, these facts imply that $\mathrm{er}_k(B) \ge 0$. Hence,
\[
0 \le \mathrm{er}_k(B) \le y_a - y_b. \qquad (18)
\]
Together, (12) and (13) imply
\[
\sum_{j=1}^{k} (r_j - t_j) = \sum_{B \in \mathcal{B}_U} \mathrm{er}_k(B). \qquad (19)
\]
Now, let $B_1, \ldots, B_h$ denote the unfinished blocks in $G_k$, and for each block $B_f$ in $\mathcal{B}_U$, let $a_f$ and $b_f$ denote the minimum and maximum indices, respectively, contained in the block. Property 3 of Lemma 10 implies that the intervals $[a_f, b_f]$ are pairwise disjoint. Hence, (18) implies
\[
0 \le \sum_{B \in \mathcal{B}_U} \mathrm{er}_k(B) \le \sum_{f=1}^{h} (y_{a_f} - y_{b_f}) \le y_1 - y_n \le \mu_y.
\]
Together with (19) this implies the lemma. Now we are ready to prove Theorem 2.
Proof of Theorem 2. Let $(T, \alpha, \beta)$ denote an optimal solution to the linear program. By definition, our rounding method produces a permutation matrix $R$. Lemma 12 implies that $(R, \alpha, \beta + \mu_y)$ is also a feasible solution to the linear program because for each $k \in \{1, \ldots, n\}$,
\[
\alpha \le \sum_{j=1}^{k} (t_j - x_j) \le \sum_{j=1}^{k} (r_j - x_j) \le \sum_{j=1}^{k} (t_j - x_j) + \mu_y \le \beta + \mu_y.
\]
Now the theorem follows because $\mathrm{OPT} \ge \mu_y$ and $\mathrm{OPT} \ge \beta - \alpha$.

LP Rounding for the Slated Stock Size Problem
We show that Theorem 2 can also be applied to the slated stock size problem, defined in Section 1.4. Let $X = \{x_1 \ge \ldots \ge x_{n_x}\}$ and $Y = \{y_1 \ge \ldots \ge y_{n_y}\}$ be an input for the slated stock size problem, and let $\mu_x = x_1$, $\mu_y = y_1$. Recall that in this problem, arbitrary disjoint subsets of $n_x$ and $n_y$ slots are slated for $x$- and $y$-jobs, respectively. Let $\eta$ denote the optimal value for the relaxation of the following integer program:
\[
\min \; \beta - \alpha
\]
\[
\forall j \in I_x: \sum_{i \in I_x} z_{ij} = 1, \qquad \forall i \in I_x: \sum_{j \in I_x} z_{ij} = 1, \qquad \forall i, j \in I_x: z_{ij} \in \{0, 1\},
\]
\[
\forall j \in I_y: \sum_{i \in I_y} z_{ij} = 1, \qquad \forall i \in I_y: \sum_{j \in I_y} z_{ij} = 1, \qquad \forall i, j \in I_y: z_{ij} \in \{0, 1\},
\]
\[
\forall k \in \{1, \ldots, n\}: \quad \alpha \le \sum_{j \le k,\, j \in I_y} \sum_{i \in I_y} z_{ij} y_i - \sum_{j \le k,\, j \in I_x} \sum_{i \in I_x} z_{ij} x_i \le \beta.
\]
Consider the generalized gasoline problem with input $\bar{x}_j = \sum_{i \in I_x} z_{ij} \cdot x_i$ for the $j$th $x$-slot and $y_1, \ldots, y_{n_y}$ to be assigned to the $y$-slots. The optimal fractional solution for this instance still has value $\eta$. Hence, Theorem 2 implies that we obtain a permutation $\pi$ of the items $y_1, \ldots, y_{n_y}$ with value at most $\eta + \mu_y$. Now we change the roles of $x$ and $y$ and consider the generalized gasoline problem with input $y_{\pi(1)}, \ldots, y_{\pi(n_y)}$ (these are the fixed items in the $y$-slots) and $x_1, \ldots, x_{n_x}$ (these items are to be permuted). The optimal fractional solution for this instance has value at most $\eta + \mu_y$. Hence, Theorem 2 implies that we obtain a permutation $\sigma$ of the items $x_1, \ldots, x_{n_x}$ with value at most $\eta + \mu_y + \mu_x$. The permutations $\pi$ and $\sigma$ together form a solution for the slated stock size problem with value at most $\eta + \mu_y + \mu_x \le 3 \cdot \mathrm{OPT}$.

Conclusions
We have introduced two new variants of the stock size problem and have presented non-trivial approximation algorithms for them. The most intriguing question for our variants as well as for the original stock size problem is if the approximation guarantees can be improved. Each of these problems is NP-hard but no APX-hardness is known. So it is conceivable that there exists a PTAS. Closing this gap seems very challenging.
We note that the additive integrality gap of the linear program in Section 3 can be arbitrarily close to µ y . Consider the following instance: Then the value of the linear program is x. However, the optimal value is µ, which can be arbitrarily larger than x.
From this proof, we can see that the Gasoline Problem is also NP-hard. This follows from the fact that, in the reduction for the alternating stock size problem, all of the $x$-values are set equal to one. Thus, they can simply be fixed in advance. Then, the only decisions required in the problem produced by the reduction involve placing the $y$-values. More specifically, one can see that the NP-hardness proof also shows that the problem of placing the $y$-values so as to minimize the difference between the highest point and the lowest point is NP-hard.

B Miscellaneous Proofs
Lemma 1. For a pair $B = \{x, y\} \in X \times Y$, let $x(B)$ and $y(B)$ denote the values of the $x$- and $y$-jobs, respectively. We will sometimes refer to a pair $\{x, y\}$ as positive or negative, according to the sign of $x - y$.
Partition the pairs into two sets B + and B − , where the first set contains all of the positive pairs, and the second set contains all of the negative pairs. (We can assume there are no pairs for which x − y = 0, since these can simply be sequenced first.) Begin with an empty list L S and with the current stock size S set to zero. We then repeat the following step until the sets B + and B − are both empty: "Find any pair {x, y} in B − such that S + x − y ≥ 0. If no such pair exists, choose a pair {x, y} from B + . Set S := S + x − y. Append x and then y to L S and remove the pair from B − or from B + ." Since the sum of all the pairs is zero, and the stock size never goes below zero, each time a positive pair is appended to the list, there exists at least one negative pair in B − . To prove the upper bound (1 + q)T on the maximum stock size, we will introduce so-called breakpoints which fulfill the following two conditions: (a) at each breakpoint, the current stock size S is less than qT ; and (b) between any two consecutive breakpoints, S remains below (1 + q)T .
The first breakpoint is at time zero; the other breakpoints are the time points just before a positive pair is sequenced. The last breakpoint is defined to be just after the last pair is sequenced. The first and last breakpoints fulfill condition (a) by definition. If one of the other breakpoints did not fulfill condition (a), then $S \ge qT$ would hold, and because of property (ii) our algorithm would have chosen a negative pair to be sequenced.
Next we consider two consecutive breakpoints $BP_i$ and $BP_{i+1}$, and we let $S_i < qT$ denote the stock size at time $BP_i$. Since at time $BP_i$ a positive pair is sequenced, for all negative pairs $B^-$ the inequality $S_i + x(B^-) - y(B^-) < 0$ holds, and hence $S_i + x(B^-) < y(B^-) \le T$; otherwise, a negative pair $B^-$ could have been sequenced next. After the positive pair $B^+$ is appended to $L_S$, the current stock size increases to $S_i + x(B^+) - y(B^+) \le S_i + qT$. Clearly $S_i + x(B^+) \le qT + T$. Either the next pair is positive (and we are again at a breakpoint) or there is a sequence of negative pairs. The stock size during any of these pairs always remains below $S_i + x(B^+) - y(B^+) + x(B^-) < T + qT$. After each of these pairs, the stock size does not increase. This shows that condition (b) holds for any two consecutive breakpoints and completes the proof.
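The greedy pairing above can be sketched as follows, under our reading of the (partly elided) parameters: $T$ bounds every job value and $qT$ bounds $|x(B) - y(B)|$ over all pairs. All names are ours:

```python
def sequence_pairs(pairs):
    # Greedy from the proof: take a negative pair whenever one keeps the
    # stock non-negative; otherwise take any positive pair.
    pos = [p for p in pairs if p[0] - p[1] > 0]
    neg = [p for p in pairs if p[0] - p[1] < 0]
    stock, peak, prefixes = 0, 0, []
    while pos or neg:
        pick = next((p for p in neg if stock + p[0] - p[1] >= 0), None)
        if pick is not None:
            neg.remove(pick)
        else:
            pick = pos.pop(0)
        x, y = pick
        peak = max(peak, stock + x)        # stock peaks right after +x
        prefixes += [stock + x, stock + x - y]
        stock += x - y
    return peak, prefixes

pairs = [(5, 2), (4, 1), (1, 4), (2, 5)]   # x - y values sum to zero
peak, prefixes = sequence_pairs(pairs)
T = max(max(p) for p in pairs)             # T bounds every job value
qT = max(abs(p[0] - p[1]) for p in pairs)  # assumed reading of q
```

On this instance the stock never goes negative and the peak is 5, within the claimed bound $(1 + q)T = T + qT = 8$.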