Efficient Approximation Schemes for Scheduling on a Stochastic Number of Machines
Abstract
We study three two-stage optimization problems with a similar structure and different objectives. In the first stage of each problem, the goal is to assign input jobs of positive sizes to unsplittable bags. After this assignment is decided, the realization of the number of identical machines that will be available is revealed. Then, in the second stage, the bags are assigned to machines. The probability vector of the number of machines in the second stage is known to the algorithm as part of the input before making the decisions of the first stage. Thus, the vector of machine completion times is a random variable. The goal of the first problem is to minimize the expected value of the makespan of the second stage schedule, while the goal of the second problem is to maximize the expected value of the minimum completion time of the machines in the second stage solution. The goal of the third problem is to minimize the $\ell_p$ norm for a fixed $p>1$, where the norm is applied to the vectors of machine completion times. Each one of the first two problems admits a PTAS, as Buchem et al. showed recently. Here we significantly improve all their results by designing an EPTAS for each one of these problems. We also design an EPTAS for $\ell_p$ norm minimization for any $p>1$.
Keywords and phrases:
Approximation algorithms, Approximation schemes, Two-stage stochastic optimization problems, Multiprocessor scheduling
Funding: Asaf Levin: Partially supported by ISF – Israel Science Foundation grant number 1467/22.
2012 ACM Subject Classification:
Theory of computation → Scheduling algorithms
Editors: Olaf Beyersdorff, Michał Pilipczuk, Elaine Pimentel, and Nguyễn Kim Thắng

1 Introduction
We consider scheduling problems where the goal is to assign jobs non-preemptively to a set of identical machines. Unlike traditional scheduling problems [22], the number of identical machines available to the scheduler is not a part of the input, but it is drawn from a known probability distribution on the set of integers $\{1,2,\ldots,m\}$ for an integer $m$ that is a part of the input, and it is given together with the probabilities. As a first stage, the decision maker completes an initial set of decisions, namely it assigns the jobs to bags, forming a partition of all input jobs. Later, once the realization of the number of machines becomes known, it packs the bags onto the machines, where the packing of a bag to a machine means that all its jobs are scheduled together on this machine. That is, in this later step, every pair of jobs that were assigned to a common bag will be assigned to a common machine, and the decision of the first stage (that is, the partition of input jobs into bags) is irrevocable. This two-stage stochastic scheduling problem was recently introduced by Buchem et al. [9].
Formally, the input consists of a set $J$ of $n$ jobs, where each job $j\in J$ has a positive rational size $p_j$ associated with it. We are given an integer $m$ and let $B=\{1,2,\ldots,m\}$ denote the set of bags. We are also given a probability measure $q$ over $\{1,2,\ldots,m\}$, where $q_i$ is a rational non-negative number denoting the probability that the number of identical machines in the resulting second stage instance is $i$. Here, we will use the property that $\sum_{i=1}^{m} q_i=1$. In the first stage of our problem, the jobs are split into bags by an algorithm. Namely, a feasible solution of the first stage is a function $\sigma:J\to B$. In the second stage, after the number of machines has been revealed, the bags are assigned to machines, such that all jobs of each bag are assigned together. Namely, the algorithm also computes assignment functions $\tau_i$, defined for every realization $i$ in the support of $q$, mapping the bags to the set $\{1,2,\ldots,i\}$ of integers denoting the indexes of machines in the instance of the second stage problem (corresponding to the realization of the number of machines that is equal to $i$). So if the realization is $i$, a job $j$ will be assigned to the machine of index $\tau_i(\sigma(j))$. The first function $\sigma$ maps jobs to bags, and the second function $\tau_i$, which is based on the realization $i$, maps bags to machines. The number of bags is $m$ (and is independent of $i$, so the partition into bags does not depend on the number of machines, which is not known yet at the time of assignment into bags), and the number of machines is $i$, so for every realization $i$ the schedule of the jobs to the machines is a feasible (non-preemptive) schedule for $i$ identical machines.
We use the terminology of such scheduling problems and let $W_\ell$ be the work (also called load) of machine $\ell$, which is the total size of jobs assigned to machine $\ell$ in a schedule. This is also the completion time of a unit speed machine processing continuously the set of jobs assigned to machine $\ell$ starting at time $0$ (in some order). The makespan of the schedule in realization $i$ is the maximum work of a machine in the second stage solution in this realization, and the Santa Claus value of the schedule in realization $i$ is the minimum work of a machine (among the $i$ machines) in the second stage solution in this realization.
The expected value of the makespan is the expected value of the random variable given by the makespan of the schedule in realization $i$. The expectation is computed based on all possible values of $i$. The first problem that we consider is the problem of minimizing the expected value of this random variable. We denote it as Pr-makespan and refer to it as the makespan minimization problem. The expected value of the Santa Claus value is defined analogously via the random variable given by the Santa Claus value of the schedule in realization $i$. The second problem we consider is the problem of maximizing the expected value of this random variable. We denote it as Pr-SantaClaus and refer to it as the Santa Claus maximization problem. While the term makespan is used in the scheduling literature with this meaning, the term Santa Claus is not traditional, and this objective is usually referred to as the minimum work of a machine in the schedule. Given the length of the resulting terminology for our settings, we prefer the non-traditional terminology of Santa Claus value. The expected value of the random variable is called cost or value for minimization problems, and it is called profit or value for maximization problems. In the $\ell_p$ norm minimization problem, the objective for $i$ machines is $\left(\sum_{\ell=1}^{i} W_\ell^{p}\right)^{1/p}$. The cost of a solution of the third problem is the expected cost based on the random variable. This last problem is denoted by Pr-$\ell_p$-norm.
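As a concrete illustration (not part of the formal scheme), the three objectives can be evaluated for a given first-stage partition and given second-stage assignments as follows. The function names and the indexing convention are ours, chosen for the sketch.

```python
def loads(bag_sizes, assignment, i):
    """Work of each of the i machines, given bag sizes and a map bag -> machine."""
    w = [0.0] * i
    for b, size in enumerate(bag_sizes):
        w[assignment[b]] += size
    return w

def expected_objectives(bag_sizes, assignments, q, p=2):
    """Expected makespan, expected Santa Claus value, and expected ell_p norm.
    assignments[i - 1] maps each bag to a machine in {0, ..., i - 1} for scenario i."""
    exp_mak = exp_santa = exp_norm = 0.0
    for i, qi in enumerate(q, start=1):
        if qi == 0:
            continue
        w = loads(bag_sizes, assignments[i - 1], i)
        exp_mak += qi * max(w)          # makespan of scenario i
        exp_santa += qi * min(w)        # Santa Claus value of scenario i
        exp_norm += qi * sum(x ** p for x in w) ** (1 / p)
    return exp_mak, exp_santa, exp_norm
```

For instance, with bag sizes $3,2,1$ and two equiprobable scenarios, the single machine of scenario $1$ receives all bags, while in scenario $2$ the bags can be split as $\{3\}$ and $\{2,1\}$.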
Since the stochastic nature of the problem results only from the different options for the number of machines (and once this number is known, the problem is deterministic), we say that if the realization of the number of machines is $i$, then this is the scenario of index $i$ or simply scenario $i$. We let $opt_i$ denote the objective function value of the second stage problem in scenario $i$ for an optimal solution for the studied problem, and let $opt$ denote the value of an optimal solution to our problem. An optimal solution needs to balance all scenarios, and we denote this optimal solution by opt as well (that is, we use the same notation as for its cost or value). We stress that $opt_i$ is not necessarily the optimal objective function value for the input jobs and $i$ machines, since the solution with a fixed set of bags may be inferior. We will assume that $\frac{1}{\varepsilon}$ is a positive integer such that $\varepsilon\le\frac12$ (this can be done without loss of generality, as one can first decrease $\varepsilon$ to the minimum between its original value and $\frac12$, and then it is always possible to decrease the value of $\varepsilon$ by a factor below $2$ to satisfy the integrality condition on $\frac{1}{\varepsilon}$). The assumptions on $\varepsilon$ will be used for all schemes (and in particular we will use $\varepsilon\le\frac12$).
An overview of our results.
Here, we study both minimization problems and a maximization problem. All types of algorithms defined here are required to have polynomial running times (and our algorithms will in fact have strongly polynomial running times). For a minimization problem, an $R$-approximation algorithm is an algorithm that always finds a solution that is feasible and has a cost at most $R$ times the cost of an optimal solution. For a maximization problem, an $R$-approximation algorithm is an algorithm that always finds a feasible solution of value at least $\frac{1}{R}$ times the value of an optimal solution. The approximation ratio of a given algorithm is the infimum value of the parameter $R$ such that this algorithm is an $R$-approximation algorithm.
A PTAS is a class of approximation algorithms such that for any $\varepsilon>0$ the class has a $(1+\varepsilon)$-approximation algorithm. An efficient polynomial time approximation scheme (EPTAS) is a stronger concept. This is a PTAS whose running time is of the form $f(\frac{1}{\varepsilon})\cdot poly(|I|)$, where $f$ is some (computable but not necessarily polynomial) function and $poly(|I|)$ is a polynomial function of the length of the (binary) encoding of the input $I$. We will justify the assumption $n>m$ later, and therefore the running time can be rewritten in this form where the polynomial is in $n$ and in the encoding lengths of the numerical input values. A fully polynomial time approximation scheme (FPTAS) is an even stronger concept, defined like an EPTAS, but the function $f$ must be a polynomial in $\frac{1}{\varepsilon}$. In this paper, we are interested in EPTAS's.
A PTAS may have time complexity of the form $|I|^{g(1/\varepsilon)}$, where $g$ can be any function of $\frac{1}{\varepsilon}$, even a power tower function. However, an EPTAS cannot have such a running time, which makes it more efficient. The concept of an EPTAS is related to fixed parameter tractable (FPT) algorithms [16].
In this paper, we focus on finding EPTAS's for the three objectives. We develop an EPTAS for the makespan minimization problem in Section 2. Then, in Section 3 we establish an EPTAS for the Santa Claus maximization problem Pr-SantaClaus. Last, in the full version of this work we modify our scheme for Pr-makespan in order to obtain an EPTAS for Pr-$\ell_p$-norm. Due to space limitations, some of the proofs of the first two results are given in the full version, and the EPTAS for Pr-$\ell_p$-norm is presented in the full version as well. In the recent work [9], a PTAS was designed for Pr-makespan, and another PTAS was designed for Pr-SantaClaus. Those PTAS's are of forms that do not allow a simple modification of the PTAS into an EPTAS; in particular, for Pr-SantaClaus a dynamic program over a state space whose size is exponential is used (a function of $\frac{1}{\varepsilon}$ appears in the exponent of the input size), and for both schemes enumeration steps of such sizes are applied.
For all objectives, we assume that $n>m$ holds. If $n\le m$, every job can be assigned to its own bag. Then, for each scenario, it is possible to apply an EPTAS for the corresponding problem on identical machines (see for example [2]; the details of previous work are discussed further below), and thereby get an EPTAS for the required problem.
The problems studied here generalize strongly NP-hard problems, and therefore one cannot expect to obtain an FPTAS (unless P$=$NP), and thus our results are the best possible. Specifically, the special case of each one of the three problems where the number of machines is known, i.e., the probability function has a single value that is equal to $1$ and all other probabilities are equal to zero, is known to be NP-hard in the strong sense. The three objectives studied here are the three main objectives previously studied for scheduling on identical machines.
Related work.
Stein and Zhong [33] introduced the scheduling problem with an unknown number of machines, but in their model the number of machines is selected later by an adversary. The problem was mostly studied with respect to makespan minimization. In the deterministic variant [33], the goal is to assign a set of jobs with known properties to identical machines. However, only an upper bound $m$ on the number of machines is given. Jobs have to be partitioned into $m$ subsets, such that every subset will act as an inseparable bag. Then, the number of machines $i$ (where $i\le m$) becomes known, and the schedule is created using the bags, without unpacking them; after the number of machines is revealed, the bags are assigned to the machines, as in the problem that we study here. Thus, this problem also has two steps or levels of computation, but the worst case over all possible values of $i$ is analyzed, where the comparison for each value of $i$ is to an optimal offline solution for $i$ identical machines and arbitrary bags. It is not hard to see that a constant approximation ratio (of at most $2$) can be obtained using a round-robin approach even for jobs of arbitrary sizes (via pre-sorting). An improved upper bound was shown using a much more careful assignment to bags [33].
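The sorted round-robin first stage mentioned above can be sketched as follows (a minimal illustration; the function name is ours): jobs are sorted in non-increasing order of size and dealt out cyclically into the $m$ bags.

```python
def round_robin_bags(sizes, m):
    """Partition jobs into m bags: sort sizes in non-increasing order and
    deal them out cyclically (bag 0, 1, ..., m - 1, 0, 1, ...)."""
    bags = [[] for _ in range(m)]
    for idx, s in enumerate(sorted(sizes, reverse=True)):
        bags[idx % m].append(s)
    return bags
```

Pre-sorting guarantees that the bags have balanced total sizes, which is the key to the constant approximation ratio for every possible realized number of machines.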
A variant where machines have speeds (similar to uniformly related machines) was defined and studied by [18] with respect to the makespan objective. In that version, the number of machines is known, but not their speeds. The number of required bags is equal to the number of machines, but some machines may have speed zero, so the case of identical machines and the makespan objective is seen as binary speeds in this work. There are several input types of interest, which are arbitrary jobs, unit size jobs, sand (one large job that can be cut into any required parts), and pebbles (jobs that have arbitrary sizes, but they are relatively small) [18, 30]. Tight bounds were proved for the case of sand and makespan with and without speeds [33, 18], and for the Santa Claus objective without speeds [33]. All these tight values are strictly smaller than $2$. For sand, since any partition of the input jobs is allowed, linear programming can be used to find the best partition. In the case with speeds and arbitrary sizes of jobs, the algorithm of [18] has a constant approximation ratio, while special cases allow smaller ratios [18, 30].
In [5], Balkanski et al. relate the problem of scheduling with an unknown number of machines and an unknown set of speeds to online algorithms with predictions. In online problems with predictions [31], the algorithm receives information on the input, where such information could have been computed via machine learning. This model takes into account both the situation where the input matches the prediction exactly and the case where it does not. It is required for the algorithm to have a relatively good performance for every input, but the performance has to be better if the input was predicted correctly. In many algorithms the performance improves as the input gets closer to the predicted input. The work of [5] provides results of this flavor.
Optimizing the worst scenario is pessimistic and does not allow one to take information learned from previous data into account, while the study of the expected value as in our stochastic problem allows us to prefer the typical scenarios in the bag assignment algorithm. See also [4, 8, 32, 17] for related work.
As we mentioned, the most relevant work to ours on the stochastic problem is [9], where two PTAS's are provided. It is also mentioned in that work that FPTAS's for the case where $m$ is seen as a constant can be designed using standard methods. The problem is called stochastic due to the probability distribution on scenarios. However, since this distribution simply induces a convex combination of the costs for different scenarios, the methods are related to those often used for deterministic algorithms and worst case analysis. The convex combination complicates the problem, and it requires carefully tailored methods for the algorithmic design. We follow the standard worst case studies of two-stage stochastic optimization problems, and we study the optimization problems of optimizing the expected value of the random variable.
All three objectives were studied initially for known numbers of identical machines, for which simple greedy approximation algorithms were presented first. Some time later, PTAS's were designed. Finally, EPTAS's were designed, or it was observed that one of the known PTAS's is an EPTAS or can be converted into an EPTAS, which is the best possible result for each one of the three cases unless P$=$NP. We provide additional details for each one of the objectives. The makespan objective was introduced by Graham [22, 23], where greedy algorithms were studied (see also [13, 20]). Twenty years later a PTAS was designed [25]. Hochbaum [24] mentions that one of the approaches of assigning large jobs of rounded sizes (namely, solving an integer linear program in fixed dimension) gives an EPTAS, and attributes this approach to D. Shmoys (see [26, 11] for recent results). The Santa Claus objective (where the name was introduced much later [7]) was studied with respect to greedy heuristics [21, 15, 14]. A PTAS which is actually an EPTAS was designed by Woeginger [34]. The $\ell_p$ norm objective function was studied with respect to greedy heuristics [10, 12, 29], and a PTAS and an EPTAS were designed [1]. The same authors generalized their results to other objectives that satisfy natural conditions [2]. The $\ell_p$ norm of the vector of machine works is seen as a more suitable measure of fairness of a schedule compared to bottleneck measures [3, 6], and therefore we study this objective in addition to those studied in [9].
Our methods.
We develop new techniques to address the uncertainty in the problems. We expect these techniques to be used in later work on approximating related variants.
We use rounding and discretization as in other work on scheduling, though we apply it not only on job sizes but also on bag sizes. The difficulty in relating new sizes to optimal solutions lies in the difference in objectives, since here the optimal value is not defined by a single schedule. However, we are still able to find such a relation using enumeration of sets of values, that is, via guessing. Since jobs are assigned to bags, we introduce configuration integer programs (IP’s) on collections of bags, which can be seen as a generalization of previously known approaches. Our configuration IP’s are based on the use of templates, where a template defines the contents of a bag, and configurations, where a configuration is an allocation of bags to one machine in a given scenario. We solve this IP using an algorithm for solving an IP in fixed dimension. All earlier steps of our schemes are highly motivated given the goal of creating such an IP and ensuring that it has a fixed dimension.
Machine numbers are split into intervals such that the most difficult interval for each guessed piece of information is solved using an IP, while the others are handled using greedy approaches. The number of intervals of scenarios differs based on the specific problem.
While the sketched approach allows us to design an EPTAS for makespan minimization, the other objectives are harder to approximate. In order to tackle them, we develop an important tool called approximated histograms. Those are histograms of solution costs or values, and we use them in order to reduce the number of relevant scenarios to a constant. The condition for using the approximated histogram is a monotonicity assumption regarding the feasibility of solutions for different scenarios and the monotonicity of the costs for a given solution among scenarios in which it is feasible.
2 An EPTAS for the makespan minimization problem
We start with the makespan minimization problem, and continue to the other objectives in later sections. The first step will be to apply a discretization on bag sizes, and bound (from above) the set of relevant bag sizes as a function of opt, where we enumerate possible values of opt and use rounding for this value as well. Our second step is to round the instance of jobs (which will be assigned to bags). We use standard guessing steps and rounding methods for these steps. Then, we approximate the rounded instance by using templates of the assignment of jobs to bags, and configurations of assigning templates to machines in every realization of the number of machines. This will allow us to formulate an integer linear program that has a special structure, namely, it is a two-stage stochastic integer linear program. This special structure, as well as trivial bounds on the parameters of this formulation, provides us with an FPT algorithm to solve the problem. This algorithm has a running time dominated by a function of $\frac{1}{\varepsilon}$ times a polynomial in the encoding length of the input, and therefore it is an EPTAS.
We will be able to reduce the number of variables such that we can apply Lenstra’s algorithm [28, 27] on the integer linear program, and the running time for solving this program will be smaller than that of solving a two-stage stochastic integer linear program. This is the case also for the other two EPTAS’s that we show, that is, we will apply Lenstra’s algorithm for all objectives. The most difficult step in the current section is to reduce the number of different values of makespan for different scenarios. Without this step, the running time will not satisfy the requirements of an EPTAS. The reduction is based on linear grouping, which is typically used for bin packing [19] and not for scheduling. The other EPTAS’s that we design require additional non-trivial steps, and we will discuss them later.
The optimal solution of this integer program is transformed into a set of decisions defining the functions $\sigma$ and $\tau_i$. Next, we present the details of the scheme together with its analysis. In what follows, we present steps where in each of those steps the resulting approximation ratio of the scheme increases by at most a multiplicative factor of $1+\varepsilon$. Since the number of such steps is a constant, by scaling $\varepsilon$ prior to applying the scheme, we get that the approximation ratio of the entire scheme will be at most $1+\varepsilon$. We will use the next lemma for our proof.
Lemma 1.
It holds that $opt_i\ge\max_{j\in J} p_j$ for any scenario $i$. Thus, $opt\ge\max_{j\in J} p_j$.
Guessing opt.
Our first step is to guess the value of opt within a multiplicative factor of $1+\varepsilon$. That is, we would like to enumerate a polynomial number of candidate values, where the enumerated set will contain (at least one) value that is in the interval $[opt,(1+\varepsilon)\cdot opt]$. We will show that for a guess in this interval we indeed find a solution of cost at most $(1+O(\varepsilon))\cdot opt$. The next lemma shows that it is possible to enumerate such a set of candidate values, since $\max_{j\in J} p_j\le opt\le\sum_{j\in J} p_j\le n\cdot\max_{j\in J} p_j$. Our algorithm performs this enumeration.
Lemma 2.
There is a set consisting of $O(\frac{\log n}{\varepsilon})$ values such that this set of values has at least one value in the interval $[opt,(1+\varepsilon)\cdot opt]$.
In what follows, with a slight abuse of notation, we let opt be this guessed candidate value (which is not smaller than the optimal cost, and larger than it by a factor of at most $1+\varepsilon$, if the guess is correct). We will show that if there is a feasible solution to Pr-makespan whose expected value of the cost is at most opt, then we will construct a feasible solution with expected value of the cost at most $(1+O(\varepsilon))\cdot opt$. By the above lemma, this means that the problem admits an EPTAS, as desired.
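The enumeration of candidate values can be sketched as a geometric grid between the two bounds used above (an illustrative sketch; the function name is ours):

```python
def candidate_guesses(sizes, eps):
    """Geometric grid of guesses for opt. Since max(sizes) <= opt <= sum(sizes),
    some grid value lands in the interval [opt, (1 + eps) * opt]."""
    lo, hi = max(sizes), sum(sizes)
    guesses = []
    g = lo
    while g < hi * (1 + eps):
        guesses.append(g)
        g *= 1 + eps
    return guesses
```

The grid has $O(\log_{1+\varepsilon}(hi/lo))$ values, and consecutive grid points differ by a factor of exactly $1+\varepsilon$, so every possible value of opt in the range has a grid point within a $1+\varepsilon$ factor above it.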
Rounding bag sizes.
Now, we provide a method to gain structure on the set of feasible solutions that we need to consider for finding a near optimal solution. Given a feasible solution, the size of bag $b$, denoted by $s_b$, is the total size of jobs assigned to this bag, that is, $s_b=\sum_{j\in J:\ \sigma(j)=b} p_j$.
Lemma 3.
There exists a solution of cost at most $(1+\varepsilon)\cdot opt$ in which the size of every non-empty bag is in the interval $[\varepsilon\cdot opt,(1+\varepsilon)\cdot opt]$.
In what follows, we will increase the size of a bag to be the next value of the form $\varepsilon\cdot opt\cdot(1+\varepsilon)^k$ for a non-negative integer $k$ (except for empty bags, for which the allowed size remains zero). Thus, we will not use precise sizes for bags but upper bounds on sizes, and we call them allowed sizes. We let $s'_b$ be this increased (allowed) size of bag $b$, that is, $s'_b=\min\{\varepsilon\cdot opt\cdot(1+\varepsilon)^k:\ k\ge0,\ \varepsilon\cdot opt\cdot(1+\varepsilon)^k\ge s_b\}$ for a non-empty bag $b$. Since for every subset of bags the total allowed size of the bags in the subset is at most $1+\varepsilon$ times the total size of the bags in the subset, we conclude that this rounding of the bag sizes will increase the cost of any solution by a multiplicative factor of at most $1+\varepsilon$ (and therefore the expected value also increases by at most this factor). Thus, in what follows, we will consider only bag sizes that belong to the set of values of the form $\varepsilon\cdot opt\cdot(1+\varepsilon)^k$ (a set of $O(\frac{1}{\varepsilon}\log\frac{1}{\varepsilon})$ values). This set suffices by Lemma 3. We conclude that the following holds.
Corollary 4.
If the guessed value (opt) is at least the optimal expected value of the makespan, then there is a solution with expected value of the makespan of at most $(1+\varepsilon)^2\cdot opt$ that uses only bags with allowed sizes of the form $\varepsilon\cdot opt\cdot(1+\varepsilon)^k$.
Note that the allowed size of a bag may be slightly larger than the actual total size of jobs assigned to the bag. Later, the allowed size acts as an upper bound (without a lower bound), and we take the allowed size into account in the calculation of machine completion times in some cases (instead of the size).
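The rounding of a bag size to its allowed size can be sketched as follows (an illustrative sketch of the discretization described above, assuming the grid $\varepsilon\cdot opt\cdot(1+\varepsilon)^k$; the function name is ours):

```python
import math

def allowed_size(bag_size, opt, eps):
    """Round a bag size up to the next value eps * opt * (1 + eps)**k, k >= 0
    (empty bags keep allowed size 0)."""
    if bag_size == 0:
        return 0.0
    base = eps * opt
    if bag_size <= base:
        return base
    k = math.ceil(math.log(bag_size / base, 1 + eps))
    return base * (1 + eps) ** k
```

The returned value is never smaller than the actual bag size, and exceeds it by a factor of less than $1+\varepsilon$ whenever the bag size is at least $\varepsilon\cdot opt$.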
Rounding job sizes.
We apply a similar rounding method for job sizes. Recall that by our guess, every job has size at most opt (see Lemma 1). Next, we apply the following modification to the jobs of sizes at most $\frac{\varepsilon\cdot opt}{2}$. Whenever there is a pair of jobs, each of which has size at most $\frac{\varepsilon\cdot opt}{2}$, we unite the two jobs (i.e., we delete the two jobs, adding a new job whose size is the sum of the sizes of the two deleted jobs). This is equivalent to restricting our attention to solutions of the first stage where we add the constraint that the two (deleted) jobs must be assigned to a common bag. We repeat this process as long as there is such a pair. If there is an additional job of size at most $\frac{\varepsilon\cdot opt}{2}$, we delete it from the instance, and in the resulting solution (after applying the steps below on the resulting instance) we add the deleted job to bag $1$. This is done after the entire algorithm completes running, so it may increase the makespan of every scenario and therefore the expected value of the makespan, but we do not take it into account in the algorithm or in calculating allowed sizes of bags. This addition of the deleted job increases the makespan of every scenario by at most $\frac{\varepsilon\cdot opt}{2}$, so the resulting expected value of the makespan will be increased by a multiplicative factor of at most $1+\varepsilon$. We consider the instance without this possible deleted job, and we prove the following.
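The merging process can be sketched as follows (an illustrative sketch; the threshold $\varepsilon\cdot opt/2$ follows the description above, and the function name is ours):

```python
import heapq

def merge_small_jobs(sizes, opt, eps):
    """Repeatedly unite pairs of jobs of size at most eps * opt / 2; if a single
    such job remains, it is set aside (returned separately) and re-added to a
    bag only after the scheme finishes."""
    threshold = eps * opt / 2
    small = [s for s in sizes if s <= threshold]
    heapq.heapify(small)
    result = [s for s in sizes if s > threshold]
    while len(small) >= 2:
        a = heapq.heappop(small)
        b = heapq.heappop(small)
        if a + b <= threshold:
            heapq.heappush(small, a + b)  # still mergeable
        else:
            result.append(a + b)          # merged job has size at most eps * opt
    leftover = small[0] if small else None
    return result, leftover
```

After this process, every remaining job has size larger than $\varepsilon\cdot opt/2$, while merged jobs have size at most $\varepsilon\cdot opt$.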
Lemma 5.
The optimal expected value of the makespan of the instance resulting from the above modification of the job sizes, among all solutions with allowed bag sizes in the modified set defined below, is at most $(1+\varepsilon)^3\cdot opt$.
We next round up the size of every job to the next value of the form $\frac{\varepsilon\cdot opt}{2}\cdot(1+\varepsilon)^k$ for an integer $k\ge0$. The size of a job cannot decrease, and it may increase by a multiplicative factor of at most $1+\varepsilon$. Job sizes are still larger than $\frac{\varepsilon\cdot opt}{2}$, and the sizes do not exceed $(1+\varepsilon)\cdot opt$. Thus, the number of different job sizes is $O(\frac{1}{\varepsilon}\log\frac{1}{\varepsilon})$. We increase the allowed size of each non-empty bag by another multiplicative factor of $1+\varepsilon$, and round it up again to the next value of the form $\varepsilon\cdot opt\cdot(1+\varepsilon)^k$. Thus, the expected value of the makespan of a feasible solution increases by a multiplicative factor of at most $(1+\varepsilon)^2$, where one factor of $1+\varepsilon$ is due to the rounding of jobs, and the second one is due to bringing allowed sizes back to the form $\varepsilon\cdot opt\cdot(1+\varepsilon)^k$. So by our guessing step, there is a feasible solution with expected value of the makespan of at most $(1+\varepsilon)^5\cdot opt$ that uses only bags with allowed sizes of this form.
A bag is called tight if the set of jobs of this bag cannot use a bag of a smaller allowed size. Note that any solution where some bags are not tight can be converted into one where all bags are tight without increasing the cost. Note that the allowed size of a non-empty bag can be written in the form $\varepsilon\cdot opt\cdot(1+\varepsilon)^k=\frac{(N+1)^k}{N^{k+1}}\cdot opt$ where $N=\frac{1}{\varepsilon}$, and since $N$ is integral, the allowed size of each bag is an integer multiple of $\frac{opt}{N^{k+1}}=\varepsilon^{k+1}\cdot opt$.
Lemma 6.
For any tight bag, it holds that the allowed size of the bag is at most $(1+\varepsilon)^2$ times its actual size (the total size of jobs assigned to the bag, where their rounded sizes are considered).
Computing the maximum number of machines for which the assignment to bags does not matter.
In our algorithm, we would like to focus on values of $i$ for which the structure of bags is crucial. Now, we will detect values of $i$ for which there always exists a good assignment of bags to machines (for reasonable sets of bags). Let $P$ be the total size of all jobs (after the transformation applied on jobs for a fixed value of opt). The total size (not the allowed size) of all bags is $P$, and we can compute the makespan based on the actual sizes of bags, which are total sizes of jobs (after merging and rounding) assigned to the bags. The motivation is that for very small values of $i$, the maximum bag size is so small compared to $\frac{P}{i}$ that bags can be seen as very small jobs, and one can apply a greedy algorithm [22] for assigning the bags, and still obtain a solution with a small makespan (based on bag sizes).
Lemma 7.
If $i\le\frac{\varepsilon\cdot P}{(1+\varepsilon)^2\cdot opt}$, then any set of bags (such that every job is assigned to a bag) where every bag has an allowed size (and size) not exceeding $(1+\varepsilon)^2\cdot opt$ leads to at least one schedule for $i$ machines with makespan in $[\frac{P}{i},(1+\varepsilon)\cdot\frac{P}{i}]$. For other values of $i$, the makespan of an optimal schedule using tight bags (based on their allowed sizes) is at most $O(\frac{opt}{\varepsilon})$.
We use the property that no bag has an allowed size above $(1+\varepsilon)^2\cdot opt$. Our algorithm computes the maximum value $i^*$ of $i$ for which $i\le\frac{\varepsilon\cdot P}{(1+\varepsilon)^2\cdot opt}$ holds, and afterwards it excludes values of $i$ for which $i\le i^*$ holds. We will have that the cost of the solution obtained for such a scenario is at most $(1+\varepsilon)\cdot\frac{P}{i}$, since bag sizes satisfy the condition for bags in the statement of Lemma 7, and therefore we can use the first part of this lemma. This is a valid approach, as adding a scenario where $i\le i^*$ to the calculation of the approximation ratio will not increase the approximation ratio beyond the value $1+\varepsilon$ (the optimal makespan of scenario $i$ is at least $\frac{P}{i}$), and the running time for computing a good solution for such a value of $i$ is polynomial in $n$. Thus, we assume that $i>i^*$ holds for every remaining scenario $i$.
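The greedy assignment of bags used for the small scenarios can be sketched as list scheduling in the spirit of [22] (an illustrative sketch with pre-sorting; the function name is ours):

```python
import heapq

def greedy_assign(bag_sizes, i):
    """List scheduling: assign bags (largest first) to the currently
    least-loaded of i machines; returns the resulting makespan."""
    loads = [0.0] * i
    heapq.heapify(loads)
    for s in sorted(bag_sizes, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + s)
    return max(loads)
```

When every bag size is at most $\varepsilon\cdot\frac{P}{i}$, the makespan obtained this way is at most $\frac{P}{i}+\max_b s_b\le(1+\varepsilon)\cdot\frac{P}{i}$, which is the argument behind the first part of Lemma 7.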
We consider the remaining scenarios. In what follows, we consider the assignment of jobs to bags and the assignment of the bags to machines in each scenario, subject to the condition that $i>i^*$ and the optimal makespan of the scenario is at most $O(\frac{opt}{\varepsilon})$. In particular, it means that the number of bags of positive allowed sizes assigned to a machine in such a scenario is $O(\frac{1}{\varepsilon^2})$, since we consider allowed bag sizes not smaller than $\varepsilon\cdot opt$ (if we exclude bags of allowed size zero).
Guessing an approximated histogram of the optimal makespan on all scenarios.
Our next guessing step is motivated by linear grouping of the values. To illustrate this step, consider a histogram corresponding to the costs of a fixed optimal solution satisfying all above structural claims, where the width of the bar corresponding to scenario $i$ is $q_i$ and its height is the value of the makespan in this scenario. Observe that when $i$ is increased, the optimal makespan does not increase (without loss of generality, since the same assignment of bags can be used), so by plotting the bars in increasing order of $i$ we get a monotone non-increasing histogram, where the total area of the bars is the expected value of the makespan of the optimal solution and the total width of all the bars is $1$ (since we assume that all scenarios are included in the histogram, and every scenario satisfies the condition $i>i^*$ by scaling the probabilities of the remaining scenarios such that the total probability becomes $1$). We sort the indexes of scenarios with (strictly) positive probabilities ($q_i$ values) and we denote by $L$ the resulting sorted list. For every $i\in L$, we compute the total width of the bars with indexes at most $i$ (using the ordering in $L$, i.e., in increasing indexes), and denote this sum by $\pi_i$. We also let $\pi_0=0$. Thus, $\pi_i$ is the total probability of scenarios in $L$ with index at most $i$, and $1-\pi_i$ is the total probability of scenarios in $L$ with index larger than $i$. The set $L$ does not contain scenarios with probability zero, but if $q_i=0$, it holds that $\pi_i=\pi_{i-1}$. According to the definitions, it holds that $\pi_m=1$, and the interval $(\pi_{i-1},\pi_i]$ on the horizontal axis corresponds to scenario $i$.
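The prefix probabilities and the selection of representative scenarios can be sketched as follows (an illustrative sketch; the granularity parameter `delta` stands for the small power of $\varepsilon$ used by the scheme, and the function names are ours):

```python
def prefix_probabilities(q):
    """pi[i] = total probability of scenarios 1..i (1-indexed; pi[0] = 0).
    The interval (pi[i-1], pi[i]] of the horizontal axis corresponds to scenario i."""
    pi = [0.0]
    for qi in q:
        pi.append(pi[-1] + qi)
    return pi

def representative_set(q, delta):
    """Scenarios i whose interval (pi[i-1], pi[i]] contains an integral
    multiple of delta; at most 1/delta + 1 scenarios are selected."""
    pi = prefix_probabilities(q)
    reps = []
    for i in range(1, len(q) + 1):
        r = int(pi[i] / delta)  # largest integer r with r * delta <= pi[i]
        if r >= 1 and r * delta > pi[i - 1]:
            reps.append(i)
    return reps
```

Only the selected scenarios will have their makespan guessed; the makespan of every skipped scenario is covered by extending the value of the nearest selected scenario.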
The idea of the next parts is to get an overestimator for the histogram by extending values to the right, and an underestimator by extending values to the left. The areas below the two estimators will differ by at most $O(\varepsilon)\cdot opt$, so the overestimator can also differ from the original histogram area by at most this amount.
Next, we compute an upper bound histogram as follows. We start with selecting a sublist $S$ of $L$. The motivation is that schedules will be computed later only for scenarios in $S$, and they will be copied to scenarios of some of the larger indexes. The set $S$ acts as a representative set, and we show that it is possible to restrict ourselves to such a set. Since we increase upper bounds on the makespan for some scenarios, the feasibility is not harmed.
An index $i\in L$ belongs to $S$ if there exists an integer $r$ such that $r\cdot\varepsilon^2\in(\pi_{i-1},\pi_i]$. The motivation is that we would like to consider points which are integral multiples of $\varepsilon^2$, and the set $S$ contains all values of $i$ for which such special values belong to the interval $(\pi_{i-1},\pi_i]$. Values of makespan will be increased such that the makespan function will still be piecewise linear, but it will have a smaller number of image values.
For every , the new upper bound histogram is defined as for all points with horizontal value between and up to where is the index just after in the sublist or up to the last point where the histogram is defined ( for the maximum value ) if is the largest element of . Since the original histogram was monotone non-increasing, the new histogram is pointwise not smaller than the original histogram. The possible modification is in the interval if is not the maximum value of , and in this case there may be a change for (that is, if ). If is the largest element of but not of , there may be a change for all such that . Thus, the total area below the new histogram is not smaller than the expected value of the makespan of the optimal solution.
We use the modified histogram to obtain another bound. For that we see as a step function whose values are the corresponding points in the upper edge of the histogram. We define a new histogram by letting it be for all (and zero for cases that is undefined due to an argument above ). In this way we delete a segment of length from and we shift the resulting histogram to the left. Since the makespan for every scenario never exceeds , every point in has a height of at most and we deleted a segment of width of , the total area that we delete is at most . The resulting histogram is pointwise at most the original histogram (and thus also not larger than ). This property holds since every value was extended to the right for an interval not longer than . Thus, if we consider instead of the original histogram, we increase the cost of the solution by a multiplicative factor not larger than .
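The left-shift construction can be sketched as follows; the representation of the step function as a list of `(width, height)` bars, and the names `shift_left` and `delta`, are ours:

```python
# Sketch of the left-shift: given a monotone non-increasing step function
# as a list of (width, height) bars, drop an initial segment of total
# width delta and shift the rest to the left. The area removed is at most
# delta times the maximum height, matching the bound used in the text.
def shift_left(bars, delta):
    removed, out, to_cut = 0.0, [], delta
    for w, h in bars:
        cut = min(w, to_cut)
        removed += cut * h
        to_cut -= cut
        if w - cut > 0:
            out.append((w - cut, h))
    return out, removed

bars = [(0.25, 4.0), (0.25, 2.0), (0.5, 1.0)]
shifted, removed = shift_left(bars, 0.1)
print(shifted, removed)
```

Since the first (tallest) bar bounds the height of the removed segment, `removed` never exceeds `delta` times that height.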
The guessing step that we use is motivated by this function . We let be the height of the histogram in the bar corresponding to the scenario . The relevant values of are those in , and the histogram only has values of the form for . We guess the histogram . This means guessing the optimal makespan of each of these scenarios, where each makespan can be one of at most different values. This holds since all allowed bag sizes are integer multiples of , so makespans are also integer multiples of , and the makespan of each scenario is at most . Thus, the number of possibilities of this guessing step is upper bounded by a function of which is .
In what follows, we consider the iteration of the algorithm defined below when we use the value of the guess corresponding to the optimal solution. Algorithmically, we will try all possibilities, check the subset of those for which we can provide a feasible solution, and output the best feasible solution obtained in this exhaustive enumeration. The running time of the next algorithm is multiplied by the number of possibilities.
The template-configuration integer program.
A template of a bag is a multiset of job sizes assigned to a common bag. We consider only multisets for which the total size of jobs is at most since the largest allowed size of any bag is smaller. Since the number of distinct job sizes (in the rounded instance) is upper bounded by and there are at most jobs assigned to a common bag (since the rounded size of each job is above ), we conclude that the number of templates is at most , which is a constant depending only on . The reason is that each bag has slots for jobs, such that each one may be empty or contain a job of one of the sizes. This calculation proves an upper bound on the number of possible templates, and we use a single template for every multiset of job sizes even when one multiset can be found in more than one way in the last calculation.
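The counting argument above can be illustrated by enumerating templates directly; `sizes`, `cap`, and `max_jobs` are illustrative stand-ins for the set of rounded job sizes, the largest allowed bag size, and the bound on the number of jobs per bag:

```python
from itertools import combinations_with_replacement

# Sketch: enumerate bag templates as multisets (sorted tuples) of rounded
# job sizes, with a bounded number of jobs and bounded total size. Each
# multiset is generated once, matching the remark that one template is
# used per multiset even if it arises in several ways.
def templates(sizes, cap, max_jobs):
    result = set()
    for k in range(max_jobs + 1):
        for combo in combinations_with_replacement(sorted(sizes), k):
            if sum(combo) <= cap:
                result.add(combo)
    return result

ts = templates({1, 2}, cap=3, max_jobs=3)
print(sorted(ts))  # [(), (1,), (1, 1), (1, 1, 1), (1, 2), (2,)]
```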
We are going to have a non-negative counter decision variable for every template , where this variable stands for the number of bags with template . Let be the set of templates, and assume that a template is represented by a vector. The length of such a vector is the number of different (rounded) job sizes, and for every index , the -th component of the vector of the template (denoted by ) is the number of jobs with size equal to the -th job size that are packed into a bag with this template. In order for such an assignment of the first stage to be feasible, we have the following constraints where denotes the number of jobs in the rounded instance whose size is the -th job size of the (rounded) instance.
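The display with these constraints was lost in extraction; a plausible reconstruction, with symbols of our own choosing ($x_t$ for the counter of template $t$, $T$ for the template set, $t_i$ for the $i$-th component of template $t$, $n$ for the bound on the number of bags, $n_i$ for the number of jobs of the $i$-th size, and $s$ for the number of distinct sizes), is:

```latex
\sum_{t \in T} x_t \le n, \qquad
\sum_{t \in T} t_i \, x_t = n_i \quad \text{for } 1 \le i \le s, \qquad
x_t \in \mathbb{Z}_{\ge 0} \quad \text{for } t \in T.
```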
The first constraint states that the number of bags is at most . If this number is strictly smaller than , then the remaining bags have allowed sizes of zero and they will not contain jobs. The second constraint, which is a family of constraints, states that the correct numbers of jobs of each size are assigned to templates, according to the number of copies of each template. Observe that the number of constraints in this family of constraints is a constant depending on (it is at most ), and all coefficients in the constraint matrix are (non-negative) integers not larger than .
We augment with the minimum index in if this index does not already belong to , in order to satisfy the condition that for every index of there is some index of that is not larger. Consider a scenario (satisfying the condition that ), and the optimal makespan of scenario , which is at most . Recall that in scenario the number of non-empty bags assigned to each machine is at most . Define a configuration of a machine in scenario to be a multiset of templates such that the multiset has at most templates (counting multiple copies of templates according to their multiplicities) and the total size of the templates encoded in the multiset is at most , which is the guess of a value of the histogram . The number of configurations is at most , and this is also an upper bound on the number of suitable configurations (whose total allowed size of bags does not exceed ). Since our mathematical program will not limit the makespan of any scenario, we use an upper bound for it by not allowing configurations whose total allowed sizes exceed the planned makespan for the scenario. We let denote the set of configurations for scenario , where is a vector of components where component for is the number of copies of template assigned to configuration . Components are non-negative integers not larger than . For scenario , we will have a family of non-negative decision variables (for all ) counting the number of machines with this configuration.
For each such scenario , we have a set of constraints each of which involves only the template counters (acting as a family of global decision variables) and the family of the -th local family of decision variables, namely the ones corresponding to this scenario. The family of constraints for the scenario are as follows.
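The display with this family of constraints was also lost in extraction; a plausible reconstruction, again with our own symbols ($y^{(i)}_C$ for the local counter of configuration $C$ in scenario $i$, $\mathcal{C}_i$ for the configuration set of the scenario, $C_t$ for the number of copies of template $t$ in $C$, and $m_i$ for the number of machines in the scenario), is:

```latex
\sum_{C \in \mathcal{C}_i} y^{(i)}_C \le m_i, \qquad
\sum_{C \in \mathcal{C}_i} C_t \, y^{(i)}_C = x_t \quad \text{for } t \in T, \qquad
y^{(i)}_C \in \mathbb{Z}_{\ge 0} \quad \text{for } C \in \mathcal{C}_i.
```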
Observe that the number of constraints in such a family of constraints is upper bounded by a constant depending only on (which is the number of possible templates plus ) and the coefficients in the constraint matrix are again all integers of absolute value at most a constant depending only on .
All decision variables are forced to be non-negative integers and we would like to solve the feasibility integer program defined above (with all constraints and decision variables). The right hand side vector consists of integers that are at most (using ). Such an integer linear program is a two-stage stochastic IP. Using the property that the number of scenarios in is upper bounded by a function of , we conclude that the integer linear program has a fixed dimension. Thus, we use Lenstra’s algorithm to solve it instead of an algorithm for two-stage stochastic IP.
We obtain a feasible solution if such a solution exists. Based on such a feasible solution, we assign jobs to bags using the variables. That is, for every , we schedule bags using template . By the constraint there are at most bags, and the other bags will be empty. Next, for every bag and every size of jobs in the rounded instance, if the template assigned to the bag has jobs of this size, we will assign jobs of this size to this bag. Doing this for all sizes and all bags is possible by the constraints , and these constraints ensure that all jobs are assigned to bags. Note that the modification for jobs consisted of merging small jobs. When such a merged job is assigned, this means that a subset of jobs is assigned. Reverting jobs to their original sizes does not increase any of the costs, since no rounding steps decreased any sizes. Next consider the assignment of bags to machines in each scenario. Consider a scenario , and let be the largest index in such that . Assign machines to configuration (for all ). It is possible by the constraint . If a configuration assigned to machine is supposed to pack copies of template , we pick a subset of bags whose assigned template is and assign these bags to machine . We do this for all templates and all machines. In this way we assign all bags by the constraints . If , at least one machine will not receive any configuration and therefore it will not receive any bags or jobs, and we refer to such a machine as empty. Since the configurations we used in scenario have a total size of jobs of at most , we conclude the following.
Corollary 8.
For every value of the guessed information for which the integer program has a feasible solution, there is a linear time algorithm that transforms the feasible solution to the integer program into a solution to the rounded instance of Pr-makespan of cost at most .
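The decoding of counters into an actual assignment can be sketched as follows; this is a toy illustration with our own names, where templates and configurations are plain tuples, and we assume the counters satisfy the coupling constraints of the integer program:

```python
# Sketch: template counters x open the bags, configuration counters y
# (for a single scenario) assign bags to machines. Each copy of a
# configuration becomes one machine and pulls bags from the per-template
# pools; the coupling constraints guarantee the pools never run dry.
def decode(x, y):
    bags = {}          # bag id -> template
    pool = {}          # template -> ids of still-unassigned bags
    bag_id = 0
    for t, count in x.items():
        pool[t] = []
        for _ in range(count):
            bags[bag_id] = t
            pool[t].append(bag_id)
            bag_id += 1
    machines = []
    for conf, count in y.items():
        for _ in range(count):
            machines.append([pool[t].pop() for t in conf])
    return bags, machines

x = {"A": 2, "B": 1}               # two bags of template A, one of B
y = {("A", "B"): 1, ("A",): 1}     # two machines in this scenario
bags, machines = decode(x, y)
print(len(bags), [sorted(m) for m in machines])
```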
Our scheme is established by noting that an optimal solution satisfying the assumptions of the guessed information provides us a multiset of templates all of which are considered in and a multiset of configurations (and all of them have total size of templates not larger than ) for which the corresponding counters satisfy the constraints of the integer program. Thus, we conclude our first main result.
Theorem 9.
Problem Pr-makespan admits an EPTAS.
3 An EPTAS for the Santa Claus problem
In this section we apply additional ideas to obtain an EPTAS for the second problem. We present the details of the scheme together with its analysis. In what follows, and similarly to the scheme for Pr-makespan, we will present steps, and in each of those steps the resulting approximation ratio of the scheme increases by at most a multiplicative factor of .
Preprocessing steps.
We consider an optimal solution opt, and analyze the information that one can guess in order to be able to approximate it. We assume without loss of generality that for all , since a solution for scenario can be used for scenario (by assigning the bags of machines in an arbitrary way). Recall that we can assume that for every value of in the support of the instance has at least non-zero sized jobs, since we assume . We will use the next lemma for our proof.
Lemma 10.
For any scenario (such that ) and an assignment of jobs to bags according to the solution opt, there exists a job for which it holds that .
We would like to split the sequence of scenarios into four parts (where some of the parts may be empty). The suffix (with the largest numbers of machines) will contain scenarios for which the assignment of bags is unimportant since the gain from them in the objective function value opt is small either because the probability is zero or because the number of machines is large. This suffix may be empty. There will also be a prefix (which could be empty) of scenarios for which the specific assignment to bags is unimportant because the number of machines is small and any reasonable assignment into bags allows a good schedule. For this prefix we will use different arguments from the ones we used for Pr-makespan. For the remaining scenarios, the prefix will consist of scenarios for which the gain is also small, and the suffix will consist of the most important scenarios.
The first step is to guess the maximum scenario for which holds, out of scenarios with strictly positive probabilities, that is, such that holds. This index is well-defined since there exists an index for which and . There are at most possible values for this guess. While opt still denotes an optimal solution, we do not guess or use its value as a part of the algorithm. Let be equal to rounded down to the next integer power of . For a fixed job (see Lemma 10), the number of possible values for is . Since the index is also not known, the number of possible values for is .
The proof of the next lemma is obtained by selecting a set of consecutive scenarios minimizing the total weighted profit out of disjoint sequences of scenarios of similar forms. The proof of the lemma requires the knowledge not only of opt but of all values of the form . However, we use the lemma to obtain the information that given the correct value (this is a rounded value, so we will be able to guess it), there is a pair of values and a value that specify the only properties of opt that we use for our algorithm.
Lemma 11.
There is a value of which is an integer power of that satisfies and there exist two indexes such that , where every non-zero index has to be a scenario in the support of , such that the following conditions hold.
The next guessing step.
In this step we guess several additional values. As mentioned above, we guess the value of (which is a power of and it is equal to within a multiplicative factor of since ), the value of (by guessing from the proof of Lemma 11, that is, we guess the power of ), and two indexes . That is, we would like to enumerate a polynomial-size set of candidate four-component vectors that contains at least one vector whose first component value is , whose second component value is , and whose third and fourth components are the two indexes. The next lemma shows that it is possible to enumerate such a set of candidate values. Our algorithm performs this enumeration.
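The enumeration of a guessed value that is an integer power of a fixed base can be sketched as follows; the bounds `lo` and `hi` stand in for the instance-dependent range discussed in the text, and the names are ours:

```python
# Sketch: candidate values for a guessed quantity are the integer powers
# of (1 + eps) inside a bounded range [lo, hi] (with lo > 0), so the
# number of candidates is logarithmic in hi / lo.
def power_candidates(eps, lo, hi):
    out, v = [], 1.0
    while v > lo:            # walk down to the lower end of the range
        v /= (1.0 + eps)
    while v <= hi:
        if v >= lo:
            out.append(v)
        v *= (1.0 + eps)
    return out

cands = power_candidates(eps=1.0, lo=1.0, hi=8.0)
print(cands)  # [1.0, 2.0, 4.0, 8.0]
```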
Lemma 12.
There is a set of vectors that contains at least one vector with the required properties. Thus, the number of guesses, including the guess of , is at most .
In what follows, we let be the first component value of the guessed information, be the second component value of the guessed information, and be the two guessed scenarios (where it is possible that is not an actual scenario). We will show that if there is a feasible solution to Pr-SantaClaus for which the guessed information describes the solution and its expected value of the objective is opt, then we will construct a feasible solution with expected value of the objective being at least . This means that the problem admits an EPTAS as desired.
We let . Furthermore, for every scenario with , we set . From now on we drop the assumption that the sum of all values is and instead we use the assumption that these are non-negative numbers with sum at most . This modification of the vector () decreases the expected value of the Santa Claus value of the optimal solution by at most (and it does not increase the objective function value of any feasible solution to our problem). The justification for this is as follows. From Lemma 11 it follows that there is a quadruple of integers (where each integer belongs to the set that we test for this integer) for which there is a solution of profit at least with the first two properties stated in the lemma, and for every scenario such that the profit is at least zero.
Partitioning the input into two independent problems.
Next, we use the above guessing step in order to partition the input into two independent inputs. First, a job is called huge if its size is strictly larger than , that is, if (and otherwise it is non-huge).
We pack every huge job into its own bag and we let denote the number of huge jobs (where we will show that holds). Every huge job can cover one machine in every scenario regardless of its exact size (because for all these scenarios we consider solutions for which is not larger than ). On the other hand, the non-huge jobs (i.e., jobs of sizes at most ) will be packed into bags of size at most . The packing of the non-huge jobs into bags will be optimized according to the scenarios of indexes at least (and at most ), and we will ignore the scenarios of indexes smaller than when we pack the non-huge jobs into bags. The resulting bags of the non-huge jobs together with the set of the huge jobs (which are bags consisting of a single job) will be packed into machines (in the corresponding scenario ) based on an existing EPTAS for the Santa Claus problem on identical machines (seeing bags as jobs). We will show that if indeed we use bags of sizes at most when we pack the non-huge jobs into bags, then the scenarios of indexes at most can be ignored. We show that this holds for any collection of bags of this form, that is, where every huge job has its own bag and other bags have total sizes of jobs not exceeding . Thus, even though the bags are defined based on scenarios where the number of machines is at least , still the scenarios with at most machines have good solutions. We will also show that it is possible to restrict ourselves to these types of collections of bags.
Lemma 13.
If all guesses are correct, any partition into bags has at least one bag with a total size of jobs no larger than . In particular, there are at most huge jobs.
Lemma 14.
An EPTAS for machines, which is applied on bags (instead of jobs) such that every huge job has its own bag and any additional bag has total size of jobs not exceeding , finds a solution of profit at least .
Proof.
Consider an optimal solution for machines and the original jobs. We show that this schedule can be modified into a schedule of the bags, such that the profit of the schedule is smaller by a factor not exceeding . Thus, an optimal schedule of the bags is not worse, and by using an EPTAS the profit may decrease by another factor of . The adaptation of the schedule is as follows. The huge jobs are assigned as before, since each of them has its own bag. All non-huge jobs are removed, and each machine receives bags until it is not possible to add another bag without exceeding the previous load or no unassigned bags remain. Given the property that the total size of bags is equal to the total size of jobs, it is either the case that the loads are equal to previous loads (in which case the value is unchanged) or there is at least one unassigned bag. In the latter case, no machine load decreased by more than an additive term of , and therefore the value is at least the previous one minus . To complete the assignment, all remaining bags are assigned arbitrarily. Since the resulting value is at least , the claim follows.
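The refilling step in the proof can be sketched as follows; the function name and the concrete numbers are ours, chosen so that the total bag size matches the total previous load:

```python
# Sketch of the adaptation in the proof: each machine greedily receives
# bags until adding another bag would exceed its previous load. Either
# all previous loads are matched exactly, or some bag is left unassigned
# and then no load dropped by more than the largest remaining bag size.
def refill(prev_loads, bag_sizes):
    bags = sorted(bag_sizes, reverse=True)
    new_loads = []
    for target in prev_loads:
        load, rest = 0.0, []
        for b in bags:
            if load + b <= target:
                load += b
            else:
                rest.append(b)
        bags = rest
        new_loads.append(load)
    return new_loads, bags   # leftover bags are assigned arbitrarily later

loads, leftover = refill([5.0, 4.0], [3.0, 3.0, 2.0, 1.0])
print(loads, leftover)  # [5.0, 4.0] []
```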
Lemma 15.
Consider the instance of our problem when we let for all and for . There exists a partition into bags where a profit at least can be obtained for any , the set of bags satisfies that every huge job is packed into its own bag, and each other bag consists of non-huge jobs of total size at most .
Proof.
The number of partitions into bags is finite for fixed (it does not exceed ). Consider the set of partitions for which a solution of profit at least can be obtained for any . There is at least one such partition for the correct guess. For every partition it is possible to define a vector of components, such that total sizes of bags appear in a non-increasing order. Consider the partition among the considered partitions for which the number of huge jobs that have their own bags is maximum, and out of such partitions, one where the vector is lexicographically minimal. We claim that this partition satisfies the requirements.
Assume by contradiction that the number of huge jobs that do not have their own bags is not . Consider a bag with a huge job that contains at least one additional job. This will be named the first bag. Consider a bag whose total size is at most , which must exist due to Lemma 13. This last bag does not have a huge job since the size of a huge job is above , and we call it the second bag. Move all contents of the first bag into the second bag excluding one huge job. For any , since , the assignment of bags to machines does not reduce the objective function value below . This holds because there is at most one machine whose total size was reduced (for every ), but if such a machine exists, it still has a huge job whose size is above . Thus, the new partition is also one of the considered partitions. The new partition has a larger number of huge jobs assigned into their own bags, because the second bag was not such a bag and the first bag became such a bag. This contradicts the choice of assignment to bags. Thus, the partition into bags consists of bags with huge jobs and bags with non-huge jobs.
Next, consider only the components of the vector which do not correspond to bags of huge jobs. This vector is also minimal lexicographically out of vectors for partitions of the non-huge jobs into bags. Assume by contradiction that the first component is above . Consider the bag of the first component (called the first bag now) and a bag with a total size at most (called the second bag now). Move one job from the first bag to the second bag. The second bag now has a total size of at most (since a non-huge job was moved). The first bag now has a smaller total size, but still larger than . The sorted vector is now smaller lexicographically, and it is still possible to obtain a solution of profit at least for any , similarly to the proof given here for huge jobs, which is a contradiction.
In summary, we can focus on the scenarios interval , assume that huge jobs have their own bags, and remaining bags have total sizes not exceeding .
Modifying the input of scenarios with indexes at least and at most .
Motivated by the last partitioning of the original input into four parts, and the fact that only scenarios with indexes at least and at most need to be considered, we apply the following transformation. First, for every we let , in the sense that we can augment every solution to the remaining instance (with only a subset of scenarios) into an EPTAS for the original instance before this change to . Furthermore, for every we let in the sense that we can ignore the profit of such scenarios. Then, every huge job is packed into a separate bag. Such a bag suffices to cover one machine in every remaining scenario. Therefore, our second step in the transformation is the following one. We delete the huge jobs from , we decrease the index of each remaining scenario by (in particular the new indexes in the support of will be in the interval ), and we enforce the condition that every bag size is at most .
As one can see, the transformations here are more complicated compared to those used in the previous section, and they have to be carefully designed and analyzed. However, now we are ready to apply methods that resemble the previous section. Using the fact that for every remaining scenario we have that is between and , we are able to apply the methods we have developed for the makespan minimization problem to obtain an EPTAS for Pr-SantaClaus, as we do next.
Rounding bag sizes.
We next provide a method to gain structure on the set of feasible solutions that we need to consider for finding a near optimal solution. Given a feasible solution, the size of bag denoted as is the total size of jobs assigned to this bag, that is, .
Lemma 16.
There exists a solution of value at least in which the size of every non-empty bag is in the interval .
In what follows, we will decrease the size of a bag to be the next value of the form for an integer (except for empty bags for which the allowed size remains zero). Thus, we will not use precise sizes for bags but lower bounds on sizes, and we call them “allowed sizes” in what follows. For bags of allowed size zero the meaning remains that such a bag is empty. We let be this decreased (allowed) size of bag , that is, . The allowed size is not smaller than and it is smaller than the size by an additive term of at most . Thus, for every subset of bags, the total allowed size of the bags in the set is at least times the total size of the bags in the subset. We conclude that this rounding of the bag sizes will decrease the value of any solution by a multiplicative factor of at most (and therefore the expected value also decreases by at most this factor), and it will not increase the value of a solution for any scenario. Thus, in what follows, we will consider only bag sizes that belong to the set . The use of this set is justified by Lemma 16. We conclude that the following holds.
Corollary 17.
If the guessed information vector is satisfied by an optimal solution, then there is a solution of expected value of the Santa Claus value of at least that uses only bags with allowed sizes in .
Note that the allowed size of a bag may be slightly smaller than the actual total size of jobs assigned to the bag. Later, the allowed size acts as a lower bound (without an upper bound), and we sometimes take the allowed size into account in the calculation of machine completion times.
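The geometric rounding of sizes used throughout this section can be sketched as follows; the choice `eps = 1.0` (grid of powers of two) is only for illustration:

```python
import math

# Sketch: round a positive size down to the nearest integer power of
# (1 + eps); the loss is a multiplicative factor of at most (1 + eps).
def round_down(size, eps):
    if size <= 0:
        return 0.0
    k = math.floor(math.log(size, 1.0 + eps))
    v = (1.0 + eps) ** k
    # guard loops against floating-point error in the logarithm
    while v > size:
        v /= (1.0 + eps)
    while v * (1.0 + eps) <= size:
        v *= (1.0 + eps)
    return v

print(round_down(5.0, 1.0))  # 4.0 (powers of 2: ..., 2, 4, 8, ...)
```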
Rounding job sizes.
We apply a similar rounding method for job sizes. Recall that by our transformation, every job has size at most (since huge jobs were removed from the input). Next, we apply the following modification to the jobs of sizes at most . While there is a pair of jobs of sizes at most , we unite such a pair of jobs. If there is an additional job of size at most , we delete it from the instance, and in the resulting solution (after applying the algorithm below on the resulting instance) we add the deleted job to an arbitrary bag. This deletion decreases the Santa Claus value of every scenario by at most , so the resulting expected value of the Santa Claus value will be decreased by at most . Adding the job back does not decrease the value. We consider the instance without this possibly deleted job, and we prove the following.
Lemma 18.
The optimal expected value of the Santa Claus value of the instance resulting from the above modification of the job sizes among all solutions with allowed bag sizes in the following modified set is at least .
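The merging of small jobs described above can be sketched as follows; the integer sizes and the threshold value are illustrative, and a merged job that is still below the threshold is eligible to be merged again:

```python
# Sketch: repeatedly unite pairs of jobs whose sizes are at most the
# threshold; at most one small job remains and is set aside, to be added
# back to an arbitrary bag after the algorithm runs.
def merge_small(jobs, threshold):
    small = [j for j in jobs if j <= threshold]
    large = [j for j in jobs if j > threshold]
    merged, leftover = [], None
    while len(small) >= 2:
        a, b = small.pop(), small.pop()
        merged.append(a + b)
        if merged[-1] <= threshold:   # still small: may be merged again
            small.append(merged.pop())
    if small:
        leftover = small.pop()
    return large + merged, leftover

jobs, leftover = merge_small([5, 2, 2, 1], threshold=2)
print(jobs, leftover)  # [5, 3] 2
```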
Next, we round down the size of every job to be the next value that is of the form for an integer . The size of every job cannot increase and it may decrease by a multiplicative factor of at most . Job sizes are still larger than and not larger than . Thus, the number of different job sizes is . We decrease the allowed size of each bag by another multiplicative factor of . Bag sizes are rounded down again to the next value of the form , and since the smallest allowed size was and , the allowed bag sizes become , and the expected value of the Santa Claus value of a feasible solution decreases by a multiplicative factor of at most . By our guessing step and our transformation, there is a feasible solution with expected value of the Santa Claus value of at least .
A bag is called tight if the set of jobs of this bag cannot use a bag of a larger allowed size. Note that any solution where some bags are not tight can be converted into one where all bags are tight without decreasing the expected value of the objective.
Lemma 19.
For any tight bag it holds that the allowed size of the bag is at least times its actual size (the total size of jobs assigned to the bag, where the rounded sizes are considered).
The final steps of the EPTAS for Pr-SantaClaus.
Our next step is to guess an approximated histogram of the optimal Santa Claus value in all scenarios. This step and the next step of formulating a template-configuration integer program of fixed dimension are similar to the ones we have established for Pr-makespan. Using these additional steps we manage to prove our second main result.
Theorem 20.
Problem Pr-SantaClaus admits an EPTAS.
References
- [1] N. Alon, Y. Azar, G. J. Woeginger, and T. Yadid. Approximation schemes for scheduling. In Proceedings of the 8th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA’97), pages 493–500, 1997.
- [2] N. Alon, Y. Azar, G. J. Woeginger, and T. Yadid. Approximation schemes for scheduling on parallel machines. Journal of Scheduling, 1(1):55–66, 1998.
- [3] B. Awerbuch, Y. Azar, E. F. Grove, M.-Y. Kao, P. Krishnan, and J. S. Vitter. Load balancing in the Lp norm. In Proc. of the 36th Annual Symposium on Foundations of Computer Science (FOCS’95), pages 383–391, 1995.
- [4] Y. Azar, L. Epstein, and R. van Stee. Resource augmentation in load balancing. Journal of Scheduling, 3(5):249–258, 2000.
- [5] E. Balkanski, T. Ou, C. Stein, and H.-T. Wei. Scheduling with speed predictions. In Proc. of the 21st International Workshop on Approximation and Online Algorithms (WAOA’23), pages 74–89, 2023.
- [6] N. Bansal and K. R. Pruhs. Server scheduling to balance priorities, fairness, and average quality of service. SIAM Journal on Computing, 39(7):3311–3335, 2010. doi:10.1137/090772228.
- [7] N. Bansal and M. Sviridenko. The Santa Claus problem. In Proc. of the 38th Annual ACM Symposium on Theory of Computing (STOC’06), pages 31–40, 2006.
- [8] M. Brehob, E. Torng, and P. Uthaisombut. Applying extra-resource analysis to load balancing. Journal of Scheduling, 3(5):273–288, 2000.
- [9] M. Buchem, F. Eberle, H. K. Kasuya Rosado, K. Schewior, and A. Wiese. Scheduling on a stochastic number of machines. In Proc. of the 27th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX’24), pages 14:1–14:15, 2024.
- [10] A. K. Chandra and C. K. Wong. Worst-case analysis of a placement algorithm related to storage allocation. SIAM Journal on Computing, 4(3):249–263, 1975. doi:10.1137/0204021.
- [11] L. Chen, K. Jansen, and G. Zhang. On the optimality of exact and approximation algorithms for scheduling problems. Journal of Computer and System Sciences, 96:1–32, 2018. doi:10.1016/J.JCSS.2018.03.005.
- [12] R. A. Cody and E. G. Coffman Jr. Record allocation for minimizing expected retrieval costs on drum-like storage devices. Journal of the ACM, 23(1):103–115, 1976. doi:10.1145/321921.321933.
- [13] E. G. Coffman Jr., M. R. Garey, and D. S. Johnson. An application of bin-packing to multiprocessor scheduling. SIAM Journal on Computing, 7(1):1–17, 1978. doi:10.1137/0207001.
- [14] J. Csirik, H. Kellerer, and G. Woeginger. The exact LPT-bound for maximizing the minimum completion time. Operations Research Letters, 11(5):281–287, 1992. doi:10.1016/0167-6377(92)90004-M.
- [15] B. L. Deuermeyer, D. K. Friesen, and M. A. Langston. Scheduling to maximize the minimum processor finish time in a multiprocessor system. SIAM Journal on Discrete Mathematics, 3(2):190–196, 1982.
- [16] R. G. Downey and M. R. Fellows. Parameterized Complexity. Springer-Verlag, Berlin, 1999.
- [17] S. Dye, L. Stougie, and A. Tomasgard. The stochastic single resource service-provision problem. Naval Research Logistics, 50(8):869–887, 2003.
- [18] F. Eberle, R. Hoeksma, N. Megow, L. Nölke, K. Schewior, and B. Simon. Speed-robust scheduling: sand, bricks, and rocks. Mathematical Programming, 197(2):1009–1048, 2023. doi:10.1007/S10107-022-01829-0.
- [19] W. Fernandez de la Vega and G. S. Lueker. Bin packing can be solved within 1+ε in linear time. Combinatorica, 1(4):349–355, 1981.
- [20] D. K. Friesen. Tighter bounds for the multifit processor scheduling algorithm. SIAM Journal on Computing, 13(1):170–181, 1984. doi:10.1137/0213013.
- [21] D. K. Friesen and B. L. Deuermeyer. Analysis of greedy solutions for a replacement part sequencing problem. Mathematics of Operations Research, 6(1):74–87, 1981. doi:10.1287/MOOR.6.1.74.
- [22] R. L. Graham. Bounds for certain multiprocessing anomalies. Bell System Technical Journal, 45(9):1563–1581, 1966.
- [23] R. L. Graham. Bounds on multiprocessing timing anomalies. SIAM Journal of Applied Mathematics, 17(2):416–429, 1969.
- [24] D. S. Hochbaum. Various notions of approximations: Good, better, best and more. In D. S. Hochbaum, editor, Approximation algorithms. PWS Publishing Company, 1997.
- [25] D. S. Hochbaum and D. B. Shmoys. Using dual approximation algorithms for scheduling problems: theoretical and practical results. Journal of the ACM, 34(1):144–162, 1987. doi:10.1145/7531.7535.
- [26] K. Jansen, K.-M. Klein, and J. Verschae. Closing the gap for makespan scheduling via sparsification techniques. Mathematics of Operations Research, 45(4):1371–1392, 2020. doi:10.1287/MOOR.2019.1036.
- [27] R. Kannan. Improved algorithms for integer programming and related lattice problems. In Proc. of the 15th annual ACM Symposium on Theory of Computing (STOC’83), pages 193–206, 1983.
- [28] H. W. Lenstra Jr. Integer programming with a fixed number of variables. Mathematics of Operations Research, 8(4):538–548, 1983. doi:10.1287/MOOR.8.4.538.
- [29] J. Y.-T. Leung and W.-D. Wei. Tighter bounds on a heuristic for a partition problem. Information Processing Letters, 56(1):51–57, 1995. doi:10.1016/0020-0190(95)00099-X.
- [30] J. Minařík and J. Sgall. Speed-robust scheduling revisited. In Proc. of the 27th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX’24), pages 8:1–8:20, 2024. doi:10.4230/LIPIcs.APPROX/RANDOM.2024.8.
- [31] M. Mitzenmacher and S. Vassilvitskii. Algorithms with predictions. Communications of the ACM, 65(7):33–35, 2022. doi:10.1145/3528087.
- [32] K. Rustogi and V. A. Strusevich. Parallel machine scheduling: Impact of adding extra machines. Operations Research, 61(5):1243–1257, 2013. doi:10.1287/OPRE.2013.1208.
- [33] C. Stein and M. Zhong. Scheduling when you do not know the number of machines. ACM Transactions on Algorithms, 16(1):9:1–9:20, 2020. doi:10.1145/3340320.
- [34] G. J. Woeginger. A polynomial time approximation scheme for maximizing the minimum machine completion time. Operations Research Letters, 20(4):149–154, 1997. doi:10.1016/S0167-6377(96)00055-7.