
Minimizing the Number of Tardy Jobs with Uniform Processing Times on Parallel Machines

Klaus Heeger, Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel. Hendrik Molter, Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel
Abstract

In this work, we study the computational (parameterized) complexity of P|r_j, p_j=p|∑w_jU_j. Here, we are given m identical parallel machines and n jobs with equal processing times, each characterized by a release date, a due date, and a weight. The task is to find a feasible schedule, that is, an assignment of the jobs to starting times on machines, such that no job starts before its release date and no machine processes several jobs at the same time, that minimizes the weighted number of tardy jobs. A job is considered tardy if it finishes after its due date.

Our main contribution is showing that P|r_j, p_j=p|∑U_j (the unweighted version of the problem) is NP-hard and W[2]-hard when parameterized by the number of machines. The former resolves an open problem in Note 2.1.19 by Kravchenko and Werner [Journal of Scheduling, 2011] and Open Problem 2 by Sgall [ESA, 2012], and the latter resolves Open Problem 7 by Mnich and van Bevern [Computers & Operations Research, 2018]. Furthermore, our result shows that the known XP-algorithm by Baptiste et al. [4OR, 2004] for P|r_j, p_j=p|∑w_jU_j parameterized by the number of machines is optimal from a classification standpoint.

On the algorithmic side, we provide alternative running time bounds for the above-mentioned known XP-algorithm. Our analysis shows that P|r_j, p_j=p|∑w_jU_j is contained in XP when parameterized by the processing time, and that it is contained in FPT when parameterized by the combination of the number of machines and the processing time. Finally, we give an FPT-algorithm for P|r_j, p_j=p|∑w_jU_j parameterized by the number of release dates or the number of due dates. With this work, we lay out the foundation for a systematic study of the parameterized complexity of P|r_j, p_j=p|∑w_jU_j.

Keywords and phrases:
Scheduling, Identical Parallel Machines, Weighted Number of Tardy Jobs, Uniform Processing Times, Release Dates, NP-hard Problems, Parameterized Complexity
Funding:
Klaus Heeger: Supported by the ISF, grant No. 1070/20.
Hendrik Molter: Supported by the ISF, grant nr. 1470/24 and by the European Union’s Horizon Europe research and innovation program under grant agreement 949707.
Copyright and License:
© Klaus Heeger and Hendrik Molter; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Parameterized complexity and exact algorithms; Mathematics of computing → Discrete mathematics; Computing methodologies → Planning and scheduling
Editors:
Olaf Beyersdorff, Michał Pilipczuk, Elaine Pimentel, and Nguyễn Kim Thắng

1 Introduction

Machine scheduling is one of the most fundamental application areas of combinatorial optimization [34]. In a typical scheduling problem, the task is to assign jobs to machines with the goal of maximizing a certain optimization objective while complying with certain constraints. Jobs are usually characterized by a processing time, a release date, a due date, and a weight (or a subset thereof). We consider the setting where we have access to several identical parallel machines that can each process one job (non-preemptively) at a time. One of the most fundamental optimization objectives is to minimize the weighted number of tardy jobs, where a job is considered tardy if it is completed after its due date.

The arguably simplest scheduling problem aims to minimize the (unweighted) number of tardy jobs on a single machine, where all jobs are released at time zero. In the standard three-field notation for scheduling problems by Graham et al. [15], this problem is called 1||∑U_j. It can be solved in polynomial time by a classic algorithm of Moore [33]. However, this problem becomes NP-hard when weights are introduced, the number of machines is increased, or release dates are added.
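Moore's algorithm admits a compact implementation. The following sketch (the function name and input format are our own, not taken from the literature) processes jobs in earliest-due-date order and, whenever the current job would finish tardy, discards the longest job scheduled so far:

```python
import heapq

def moore_hodgson(jobs):
    """Minimize the number of tardy jobs on one machine (1||sum U_j).

    jobs: list of (processing_time, due_date) pairs.
    Returns the number of tardy jobs in an optimal schedule.
    """
    # Process jobs in EDD (earliest due date) order.
    jobs = sorted(jobs, key=lambda j: j[1])
    heap = []    # max-heap (via negation) over processing times of kept jobs
    finish = 0   # completion time of the currently kept (early) set
    for p, d in jobs:
        heapq.heappush(heap, -p)
        finish += p
        if finish > d:                      # due date missed:
            finish += heapq.heappop(heap)   # drop the longest kept job (-p_max)
    # Jobs remaining in the heap are early; all discarded jobs are tardy.
    return len(jobs) - len(heap)
```

For instance, on the jobs (2,3), (3,4), (2,5), (4,7) the algorithm keeps two jobs early and reports two tardy jobs, which is optimal.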

  • The weighted version, 1||∑w_jU_j, is one of the first scheduling problems shown to be (weakly) NP-hard. It remains hard even if all jobs have the same due date, as in this case, it is equivalent to the well-known Knapsack problem, which was already included in Karp’s famous list of 21 NP-hard problems [21]. The problem can be solved in pseudopolynomial time with a classic algorithm by Lawler and Moore [28].

  • Adding a second machine leads to the problem P2||∑U_j, which is (weakly) NP-hard even if all jobs have the same due date, as it is a generalization of the well-known Partition problem, which was also already included in Karp’s list of 21 NP-hard problems [21]. If the number of machines is unbounded, the problem is called P||∑U_j and it is strongly NP-hard even if all jobs have the same due date, as it generalizes the well-known Bin Packing problem [14].

  • Introducing release dates leads to the problem 1|r_j|∑U_j, which is weakly NP-hard [30], even if there are only two different release dates and two different due dates. The reduction of Lenstra et al. [30] can be extended in a straightforward way to show that, in general, 1|r_j|∑U_j is strongly NP-hard.
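The pseudopolynomial Lawler–Moore algorithm mentioned in the first item can be sketched as a due-date-bounded knapsack-style dynamic program; the following minimal illustration (interface and names are ours) runs in O(n·d_max) time:

```python
def lawler_moore(jobs):
    """Maximize the total weight of early jobs on one machine (1||sum w_j U_j).

    jobs: list of (processing_time, due_date, weight) triples.
    The weighted number of tardy jobs is the total weight minus this value.
    """
    jobs = sorted(jobs, key=lambda j: j[1])            # EDD order
    d_max = max((d for _, d, _ in jobs), default=0)
    NEG = float("-inf")
    # dp[t] = maximum weight of an early set of already-considered jobs
    #         whose total processing time is exactly t
    dp = [NEG] * (d_max + 1)
    dp[0] = 0
    for p, d, w in jobs:
        # a newly added early job must finish by its own due date d
        for t in range(min(d, d_max), p - 1, -1):
            if dp[t - p] != NEG:
                dp[t] = max(dp[t], dp[t - p] + w)
    return max(x for x in dp if x != NEG)
```

On the jobs (2,3,10), (3,4,5), (2,5,7) this returns 17: the first and third job can be scheduled early, and no heavier early set exists.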

Problem Setting and Motivation.

We consider the case where release dates and weights are present and we have multiple identical parallel machines. However, we add the restriction that all processing times are the same. This problem is called P|r_j, p_j=p|∑w_jU_j; we give a formal definition in Section 2. The problem arises naturally in manufacturing systems, where

  • exact specifications by the customers for the product have negligible influence on the production time, but

  • the specifications only become available at certain times and customers request the product to meet certain due dates.

As an illustrative example, consider the problem of scheduling burn-in operations in integrated circuit manufacturing. The specification of the layout of the circuit may only become available at a certain time, as it takes time to optimize it. At the same time, the specific layout has little to no influence on the processing time of the burn-in operation [31]. Furthermore, the customer may wish to have the circuit delivered at a certain due date.

To the best of our knowledge, the only known algorithmic result for P|r_j, p_j=p|∑w_jU_j is a polynomial-time algorithm by Baptiste et al. [2, 3] for the case where the number of machines is a constant. However, two special cases are known to be polynomial-time solvable. It is folklore that the case without release dates, P|p_j=p|∑w_jU_j, and the case where the processing times equal one, P|r_j, p_j=1|∑w_jU_j, can both be reduced to the Linear Assignment problem in a straightforward manner. The Linear Assignment problem is known to be solvable in polynomial time [27]. Furthermore, it is known that, given an instance of P|r_j, p_j=p|∑w_jU_j, we can check in polynomial time whether all jobs can be scheduled such that no job is tardy [5, 37, 38].
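The folklore reduction for the case without release dates can be made concrete as follows. Without release dates, there is an optimal schedule that runs each machine back to back from time 0, so the k-th job on a machine completes at time k·p; one then assigns jobs to these (machine, position) slots. In the sketch below (names are ours), a tiny brute force over slot assignments stands in for a polynomial-time Linear Assignment solver:

```python
from itertools import permutations

def max_early_weight_no_releases(m, p, jobs):
    """Slot view of P|p_j=p|sum w_j U_j (no release dates).

    jobs: list of (due_date, weight) pairs.
    A job earns its weight iff its slot's completion time meets its due date.
    """
    n = len(jobs)
    per_machine = (n + m - 1) // m       # this many positions always suffice
    # completion time of each (machine, position) slot
    slots = [k * p for _ in range(m) for k in range(1, per_machine + 1)]
    best = 0
    for perm in permutations(slots, n):  # job j gets completion time perm[j]
        best = max(best, sum(w for (d, w), c in zip(jobs, perm) if c <= d))
    return best
```

With m = 2 machines, p = 2, and jobs (due date, weight) = (2,5), (2,4), (4,3), (3,1), the optimum early weight is 12: the two heavy jobs take the first slots of the two machines, the weight-3 job a second slot, and the weight-1 job is tardy.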

Our Contribution.

The complexity status of P|r_j, p_j=p|∑w_jU_j and its unweighted version P|r_j, p_j=p|∑U_j was a longstanding open problem. Kravchenko and Werner [25] pointed out that this question remains unanswered in Note 2.1.19, and Sgall [36] listed this issue as Open Problem 2. Our main contribution is to resolve the complexity status of P|r_j, p_j=p|∑U_j (and hence also of P|r_j, p_j=p|∑w_jU_j) by showing the following.

  • P|r_j, p_j=p|∑U_j is NP-hard.

Having established NP-hardness, we focus on studying the parameterized complexity of P|r_j, p_j=p|∑w_jU_j. As mentioned before, Baptiste et al. [2, 3] showed that the problem is in XP when parameterized by the number of machines. Mnich and van Bevern [32, Open Problem 7] asked whether this result can be improved to an FPT-algorithm. We answer this question negatively by showing the following.

  • P|r_j, p_j=p|∑U_j is W[2]-hard when parameterized by the number of machines.

On the positive side, we give several new parameterized tractability results. By providing an alternative running time analysis of the algorithm for P|r_j, p_j=p|∑w_jU_j by Baptiste et al. [2, 3], we show the following.

  • P|r_j, p_j=p|∑w_jU_j is in XP when parameterized by the processing time.

  • P|r_j, p_j=p|∑w_jU_j is in FPT when parameterized by the combination of the number of machines and the processing time.

Finally, we give a new algorithm based on a mixed integer linear program (MILP) formulation for P|r_j, p_j=p|∑w_jU_j. We show the following.

  • P|r_j, p_j=p|∑w_jU_j is in FPT when parameterized by the number of different release dates.

  • P|r_j, p_j=p|∑w_jU_j is in FPT when parameterized by the number of different due dates.

We conclude by pointing to new future research directions. Most prominently, we leave open whether P|r_j, p_j=p|∑w_jU_j is in FPT or W[1]-hard when parameterized by the processing time.

Related Work.

We give an overview of the literature investigating the parameterized complexity of minimizing the weighted number of tardy jobs in various related settings.

The problem of minimizing the weighted number of tardy jobs on a single machine, 1||∑w_jU_j, has been extensively studied in the literature under various aspects and constraints. Hermelin et al. [20] showed that the classical pseudopolynomial-time algorithm by Lawler and Moore [28] can be improved in several special cases. Hermelin et al. [19] give an overview of the parameterized complexity of 1||∑w_jU_j with respect to the parameters “number of processing times”, “number of due dates”, and “number of weights” (and their combinations). In particular, 1||∑w_jU_j is in XP when parameterized by the number of different processing times [19]. This presumably cannot be improved to an FPT result, as it was recently shown that 1||∑w_jU_j parameterized by the number of different processing times is W[1]-hard [16]. Faster algorithms are known for the case where the job weights equal the processing times [4, 12, 23, 35], and the problem has also been studied under fairness aspects [17]. Kaul et al. [22] extended the results of Hermelin et al. [19] to 1|r_j|∑w_jU_j, considering the “number of release dates” as an additional parameter.

Minimizing the weighted number of tardy jobs on parallel machines has mostly been studied in the context of interval scheduling (P|d_j − r_j = p_j|∑w_jU_j) and generalizations thereof [1, 18, 26, 39].

The setting where processing times are assumed to be uniform has been studied under various optimization criteria (different from minimizing the weighted number of tardy jobs) and constraints. For an overview, we refer to Kravchenko and Werner [25] and Baptiste et al. [3]. The even more special case of unit processing times has also been extensively studied. Reviewing the related literature in this setting is out of scope for this work.

2 Preliminaries

Scheduling.

Using the standard three-field notation for scheduling problems by Graham et al. [15], the problem considered in this work is called P|r_j, p_j=p|∑w_jU_j. In this problem, we have n jobs and m machines. Each machine can process one job at a time. Generally, we use the variable j to denote jobs and the variable i to denote machines. Each job j has a processing time p_j = p, a release date r_j, a due date d_j, and a weight w_j, where p, r_j, d_j, and w_j are nonnegative integers. We use r#, d#, and w# to denote the number of different release dates, due dates, and weights, respectively.

A schedule maps each job j to a combination of a machine i and a starting time t, indicating that j shall be processed on machine i starting at time t. More formally, a schedule is a function σ: {1, …, n} → {1, …, m} × ℕ. If for job j we have σ(j) = (i, t), then job j is scheduled to be processed on machine i starting at time t until time t+p. A schedule σ is feasible if there is no job j with σ(j) = (i, t) and t < r_j, and if there is no pair of jobs j, j′ with σ(j) = (i, t) and σ(j′) = (i, t′) such that |t − t′| < p. We say that a job j is early in a feasible schedule σ if σ(j) = (i, t) and t+p ≤ d_j; otherwise, we say that job j is tardy. We say that machine i is idle at time t in a feasible schedule σ if there is no job j with σ(j) = (i, t′) and t′ ≤ t < t′+p. The goal is to find a feasible schedule that minimizes the weighted number of tardy jobs or, equivalently, maximizes the weighted number of early jobs W = ∑_{j : σ(j)=(i,t), t+p ≤ d_j} w_j. We call a feasible schedule that maximizes the weighted number of early jobs optimal. Formally, the problem is defined as follows.

P|r_j, p_j=p|∑w_jU_j

Input:

A number n of jobs, a number m of machines, a processing time p, a list of release dates (r_1, r_2, …, r_n), a list of due dates (d_1, d_2, …, d_n), and a list of weights (w_1, w_2, …, w_n).

Task:

Compute a feasible schedule σ that maximizes W = ∑_{j : σ(j)=(i,t), t+p ≤ d_j} w_j.

We use P|r_j, p_j=p|∑U_j to denote the unweighted (or, equivalently, uniformly weighted) version of P|r_j, p_j=p|∑w_jU_j, that is, the case where w_j = w_{j′} for every two jobs j and j′, or, equivalently, w# = 1.

Note that given any feasible schedule of the early jobs, one can easily extend this to a feasible schedule of all jobs as tardy jobs can be scheduled arbitrarily late. Thus, when describing a schedule, we will only describe how the early jobs are scheduled.
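The definitions above can be made concrete with a small checker that verifies the two feasibility conditions for a (partial) schedule and computes the weighted number of early jobs W. The interface below is our own illustrative sketch; omitted jobs are treated as tardy, in line with the remark that tardy jobs can be scheduled arbitrarily late:

```python
def check_schedule(schedule, p, release, due, weight):
    """Verify feasibility and return the weighted number of early jobs W.

    schedule: dict job -> (machine, start_time); jobs may be omitted.
    """
    # no job may start before its release date
    for j, (i, t) in schedule.items():
        assert t >= release[j], f"job {j} starts before its release date"
    # no two jobs may overlap on the same machine
    by_machine = {}
    for j, (i, t) in schedule.items():
        by_machine.setdefault(i, []).append(t)
    for starts in by_machine.values():
        starts.sort()
        for a, b in zip(starts, starts[1:]):
            assert b - a >= p, "two jobs overlap on one machine"
    # a job is early iff it completes by its due date
    return sum(weight[j] for j, (i, t) in schedule.items() if t + p <= due[j])
```

For example, with p = 2, releases {1: 0, 2: 1}, due dates {1: 2, 2: 4}, and weights {1: 3, 2: 5}, the schedule {1: (0, 0), 2: (0, 2)} is feasible and both jobs are early, giving W = 8.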

Given an instance I of P|r_j, p_j=p|∑w_jU_j, we can make the following observation, which essentially implies that one can switch the roles of release dates and due dates.

Observation 1.

Let I be an instance of P|r_j, p_j=p|∑w_jU_j and let d_max be the largest due date of any job in I. Let I′ be the instance obtained from I by setting r′_j = d_max − d_j and d′_j = d_max − r_j. Then I admits a feasible schedule where the weighted number of early jobs is W if and only if I′ admits a feasible schedule where the weighted number of early jobs is W.

To see that Observation 1 is true, note that a feasible schedule σ for I can be transformed into a feasible schedule σ′ for I′ (with the same weighted number of early jobs) by setting σ′(j) = (i, d_max − t − p), where σ(j) = (i, t).
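The mirroring in Observation 1 is simple to implement and sanity-check; the two helper functions below (names and list-based interface are our own) construct the mirrored instance and the mirrored schedule:

```python
def mirror_instance(release, due):
    """Observation 1: swap the roles of release dates and due dates.

    Returns (release', due') with r'_j = d_max - d_j and d'_j = d_max - r_j.
    """
    d_max = max(due)
    return [d_max - d for d in due], [d_max - r for r in release]

def mirror_schedule(p, due, schedule):
    """Map sigma(j) = (i, t) to sigma'(j) = (i, d_max - t - p)."""
    d_max = max(due)
    return {j: (i, d_max - t - p) for j, (i, t) in schedule.items()}
```

For instance, with p = 2, releases [0, 1], and due dates [4, 5], the schedule {0: (0, 0), 1: (0, 2)} (both jobs early) maps to {0: (0, 3), 1: (0, 1)}, which is feasible for the mirrored instance and again has both jobs early.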

We now show that we can restrict ourselves to schedules where jobs may start only at “few” different points in time, which will be useful in our proofs. In order to do so, we define a set 𝒯 of relevant starting time points.

𝒯 = { t | ∃ j and ∃ ℓ ∈ {0, 1, …, n} s.t. t = r_j + ℓp }

It is known that there always exists an optimal schedule where the starting times of all jobs are in 𝒯.

Lemma 2 ([2, 3]).

Let I be an instance of P|r_j, p_j=p|∑w_jU_j. Then there exists a feasible schedule σ that maximizes the weighted number of early jobs such that for each job j we have σ(j) = (i, t) for some t ∈ 𝒯.
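Lemma 2 justifies a simple exact solver for sanity checks on tiny instances: restrict all starting times to 𝒯 and enumerate. The sketch below (our own illustration; exponential time, so for toy instances only) leaves each job either unscheduled, i.e., tardy, or assigns it a machine and a start in 𝒯:

```python
from itertools import product

def relevant_times(release, p, n):
    """The set T of relevant starting points: t = r_j + l*p with 0 <= l <= n."""
    return sorted({r + l * p for r in release for l in range(n + 1)})

def brute_force_opt(m, p, release, due, weight):
    """Optimal weighted number of early jobs; starts restricted to T (Lemma 2)."""
    n = len(release)
    T = relevant_times(release, p, n)
    # per job: None (leave it tardy) or a (machine, start) pair with start >= r_j
    options = [[None] + [(i, t) for i in range(m) for t in T if t >= r]
               for r in release]
    best = 0
    for choice in product(*options):
        used = [c for c in choice if c is not None]
        feasible = all(not (i1 == i2 and abs(t1 - t2) < p)
                       for a, (i1, t1) in enumerate(used)
                       for (i2, t2) in used[a + 1:])
        if feasible:
            best = max(best, sum(w for c, d, w in zip(choice, due, weight)
                                 if c is not None and c[1] + p <= d))
    return best
```

On one machine with p = 2, releases (0, 0, 1), due dates (2, 4, 5), and unit weights, at most two jobs can be early (the three jobs would need total processing time 6, exceeding the largest due date), and the solver confirms this.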

Parameterized Complexity.

We use the following standard concepts from parameterized complexity theory [7, 11, 13]. A parameterized problem L ⊆ Σ* × ℕ is a subset of all instances (x, k) from Σ* × ℕ, where k denotes the parameter. A parameterized problem L is in the class FPT (or fixed-parameter tractable) if there is an algorithm that decides every instance (x, k) of L in f(k)·|x|^O(1) time for some computable function f that depends only on the parameter. A parameterized problem L is in the class XP if there is an algorithm that decides every instance (x, k) of L in |x|^f(k) time for some computable function f that depends only on the parameter. If a parameterized problem L is W[1]-hard or W[2]-hard, then it is presumably not contained in FPT [7, 11, 13].

3 Hardness of P|r_j, p_j=p|∑U_j

In this section, we present our main result, namely that the unweighted version of our scheduling problem, P|r_j, p_j=p|∑U_j, is NP-hard and W[2]-hard when parameterized by the number m of machines. The former resolves an open problem in Note 2.1.19 by Kravchenko and Werner [25] and Open Problem 2 by Sgall [36], and the latter resolves Open Problem 7 by Mnich and van Bevern [32].

Theorem 3.

P|r_j, p_j=p|∑U_j is NP-hard and W[2]-hard parameterized by the number m of machines.

In order to show Theorem 3, we present a parameterized reduction from Hitting Set parameterized by solution size k, which is known to be NP-hard [21] (unparameterized) and W[2]-hard [10].

Hitting Set

Input:

A finite set U = {u_0, …, u_{n−1}}, a set 𝒜 = {A_0, …, A_{m−1}} of subsets of U, and an integer k.

Question:

Is there a hitting set of size k, that is, a set X ⊆ U with |X| = k and X ∩ A_j ≠ ∅ for every j ∈ {0, …, m−1}?

Let I = (U = {u_0, …, u_{n−1}}, 𝒜 = {A_0, …, A_{m−1}}, k) be an instance of Hitting Set. In order to ease the presentation, we give jobs names rather than identifying them with natural numbers. Furthermore, for a job J, we use r(J) to denote the release date of J, and we use d(J) to denote the due date of J.

Our reduction will have k machines. The main idea behind the reduction is as follows. Each machine acts as a “selection gadget”, that is, we will interpret the jobs scheduled on each machine in an optimal schedule as the selection of a particular element of U to be included in the hitting set. As there are k machines, this ensures that the (hitting) set consisting of the selected elements has size at most k. Intuitively, we want that selecting element u_i on a machine corresponds to all jobs on this machine starting at time i modulo p in an optimal schedule. For each set A_j ∈ 𝒜, there are two kinds of jobs.

  • First, jobs J_{A_j,u_i} for each u_i ∈ A_j, where scheduling job J_{A_j,u_i} encodes that the element u_i is selected to be part of the hitting set.

  • Second, there are k−1 dummy jobs D_{A_j}, which can be scheduled on the up to k−1 machines not corresponding to elements of A_j.

We give the jobs J_{A_j,u_i} and D_{A_j} release dates and due dates such that they are the only jobs that can be started in the interval [jp, (j+1)p − 1] and be early. See Figure 1 for an illustration. Intuitively, this makes sure that an optimal solution has to schedule one of these jobs on each machine. In particular, one job J_{A_j,u_i} is scheduled, implying that A_j is hit by one of the selected elements. See Figure 2 for an illustration.

Figure 1: The jobs of the first reduction approach for the Hitting Set instance I = ({u_1, u_2, u_3, u_4, u_5, u_6}, {A_1 = {u_1, u_2, u_3}, A_2 = {u_3, u_4, u_5}, A_3 = {u_1, u_5, u_6}}, k = 2).
Figure 2: An optimal schedule for the instance from Figure 1 representing the hitting set {u_1, u_3}.
Figure 3: An optimal schedule for the instance from Figure 1 which does not represent a hitting set.

There is, however, one problem with the reduction as sketched above. We do not ensure that all early jobs scheduled on some machine start at the same time modulo p. Thus, it is possible to schedule, e.g., first job J_{A_1,u_2} on machine 1 and then job J_{A_2,u_3} such that the two jobs are both early. Machine 1 then does not encode the selection of a single element to the hitting set. We illustrate an example in Figure 3. However, note that it is only possible to increase the index of the “selected” element, and as there are only k machines, the total increase is bounded by k(n−1). Consequently, repeating the reduction sketched above k(n−1)+1 times ensures that at least one of the repeated instances will select only one item per machine, and then this instance correctly encodes a hitting set of size k.

We now describe the reduction in detail. Formally, given the instance I of Hitting Set, we construct an instance I′ of P|r_j, p_j=p|∑U_j as follows. We set the processing time to p = 2n. We construct the following jobs for each A_j ∈ 𝒜 and each ℓ ∈ {0, …, k(n−1)}:

  • one job J^ℓ_{A_j,u_i} for each u_i ∈ A_j with release date r(J^ℓ_{A_j,u_i}) = (ℓm+j)p + i and due date d(J^ℓ_{A_j,u_i}) = r(J^ℓ_{A_j,u_i}) + p, and

  • k−1 dummy jobs D^ℓ_{A_j} with release date r(D^ℓ_{A_j}) = (ℓm+j)p and due date d(D^ℓ_{A_j}) = r(D^ℓ_{A_j}) + p + n.

Finally, we set the number of machines to k. This finishes the construction. For an overview of the due dates and release dates of the jobs, see Table 1. We can easily observe the following.

Table 1: Overview of the release dates and due dates of the jobs created for each A_j ∈ 𝒜 and each ℓ ∈ {0, …, k(n−1)}.

job            release date    due date          multiplicity
J^ℓ_{A_j,u_i}  (ℓm+j)p + i     (ℓm+j+1)p + i     1
D^ℓ_{A_j}      (ℓm+j)p         (ℓm+j+1)p + n     k−1

Observation 4.

Given an instance I of Hitting Set, the above-described instance I′ of P|r_j, p_j=p|∑U_j can be computed in polynomial time and has k machines.
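The construction is mechanical enough to script; the generator below (an illustrative sketch with our own naming scheme, using 0-indexed elements) emits, for every A_j and every repetition ℓ, the selection jobs and the k−1 copies of the dummy job with the release dates and due dates from Table 1:

```python
def hitting_set_to_scheduling(n_elements, sets, k):
    """Build the P|r_j,p_j=p|sum U_j instance from a Hitting Set instance.

    Elements are 0..n-1; `sets` is a list of lists A_0, ..., A_{m-1}.
    Returns (num_machines, p, jobs) with jobs a list of (name, release, due)
    triples; dummy jobs appear with multiplicity k-1.
    """
    n, m = n_elements, len(sets)
    p = 2 * n
    jobs = []
    for ell in range(k * (n - 1) + 1):          # repetitions of the gadget
        for j, A in enumerate(sets):
            base = (ell * m + j) * p
            for i in A:                          # selection jobs J^ell_{A_j,u_i}
                jobs.append((f"J[{ell},{j},u{i}]", base + i, base + i + p))
            for _ in range(k - 1):               # dummy jobs D^ell_{A_j}
                jobs.append((f"D[{ell},{j}]", base, base + p + n))
    return k, p, jobs
```

For the instance of Figure 1 (n = 6, three sets of size 3, k = 2, written 0-indexed), this creates k(n−1)+1 = 11 repetitions of 9 + 3 = 12 jobs each, i.e., 132 jobs on 2 machines with p = 12.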

We continue by showing the correctness of the reduction. More specifically, we show that the Hitting Set instance I is a yes-instance if and only if the constructed instance I′ of P|r_j, p_j=p|∑U_j admits a feasible schedule with (k(n−1)+1)mk early jobs. We split the proof into the forward and the backward direction. We start with the forward direction.

Lemma 5.

If the Hitting Set instance I admits a hitting set of size k, then the P|r_j, p_j=p|∑U_j instance I′ admits a feasible schedule with (k(n−1)+1)mk early jobs.

Proof.

Let X = {u_{i_1}, …, u_{i_k}} be a hitting set of size k for I. We construct a schedule for I′ as follows. On machine q, for each ℓ ∈ {0, …, k(n−1)} and j ∈ {0, …, m−1}, we schedule one job to start at time (ℓm+j)p + i_q. This job is J^ℓ_{A_j,u_{i_q}} if u_{i_q} ∈ A_j, and D^ℓ_{A_j} otherwise. Because X is a hitting set, we have that u_{i_q} ∈ A_j for some q ∈ {1, …, k}, and hence we schedule each dummy job D^ℓ_{A_j} at most k−1 times, that is, at most its multiplicity many times.

We can observe that all jobs scheduled so far are early. For each job J^ℓ_{A_j,u_{i_q}} that is scheduled on machine q, we have set its starting time to (ℓm+j)p + i_q, which equals this job’s release date (cf. Table 1). Furthermore, job J^ℓ_{A_j,u_{i_q}} finishes at (ℓm+j+1)p + i_q, its due date. Each dummy job D^ℓ_{A_j} is early as well, since its release date is smaller than or equal to the release date of J^ℓ_{A_j,u_{i_q}}, and its due date is larger than or equal to the due date of J^ℓ_{A_j,u_{i_q}}. Furthermore, we can observe that there is no overlap in the processing intervals of any two jobs scheduled on machine q.

It follows that we have feasibly scheduled (k(n−1)+1)mk jobs such that they finish early. We schedule the remaining jobs in some arbitrary way such that the schedule remains feasible.

Next, we continue with the backward direction.

Lemma 6.

If the P|r_j, p_j=p|∑U_j instance I′ admits a feasible schedule with (k(n−1)+1)mk early jobs, then the Hitting Set instance I admits a hitting set of size k.

Proof.

Let σ be a feasible schedule for I′ with (k(n−1)+1)mk early jobs. Since the largest due date is (k(n−1)m+m)p + n and p = 2n, it follows that on each machine, at most k(n−1)m+m = (k(n−1)+1)m jobs can be scheduled in a feasible way such that they finish early. Consequently, on each machine, there are exactly (k(n−1)+1)m early jobs.

More specifically, for each machine, each ℓ ∈ {0, …, k(n−1)}, and each j ∈ {0, …, m−1}, there must be one job which starts in the interval [(ℓm+j)p, (ℓm+j)p+n]. Otherwise, there would be fewer than (k(n−1)+1)m early jobs on that machine. For machine q and each ℓ, let x^q_ℓ ∈ {1, …, n} be such that there is a job on machine q which starts at time ℓmp + x^q_ℓ (the existence of such a job is guaranteed by the previous observation). Then we must have 1 ≤ x^q_0 ≤ x^q_1 ≤ ⋯ ≤ x^q_{k(n−1)} ≤ n. This follows from the observation that if x^q_ℓ > x^q_{ℓ+1}, then the starting time of the job corresponding to x^q_{ℓ+1} would be earlier than the completion time of the job corresponding to x^q_ℓ, and hence the schedule would be infeasible. Consequently, there are at most n−1 values of ℓ such that x^q_ℓ ≠ x^q_{ℓ+1}. As there are k machines, this implies that there are at most k(n−1) values ℓ such that x^q_ℓ ≠ x^q_{ℓ+1} for some machine q. We can conclude that there exists at least one ℓ ∈ {0, …, k(n−1)} so that x^q_ℓ = x^q_{ℓ+1} for every machine q. This implies that for each j ∈ {0, …, m−1} and each machine q, there is one job starting at time (ℓm+j)p + x^q_ℓ on machine q. We fix such an ℓ for the rest of the proof and claim that X = {u_{x^1_ℓ}, …, u_{x^k_ℓ}} is a hitting set of size at most k for I.

Clearly, X has size at most k. Consider some set A_j ∈ 𝒜 with j ∈ {0, …, m−1}. The only jobs which can start in the interval [(ℓm+j)p, (ℓm+j)p+n] are the dummy jobs D^ℓ_{A_j} and the jobs J^ℓ_{A_j,u_i} for u_i ∈ A_j. Because there are only k−1 dummy jobs D^ℓ_{A_j} but k machines, this implies that there exists at least one machine q such that job J^ℓ_{A_j,u_i} is scheduled on machine q for some u_i ∈ A_j. We know that on machine q, one job starts in the interval [(ℓm+j)p, (ℓm+j)p+n], and it has starting time (ℓm+j)p + x^q_ℓ. By construction, this job must be J^ℓ_{A_j,u_i} with i = x^q_ℓ, which implies that u_i = u_{x^q_ℓ} ∈ X. We can conclude that X is a hitting set for I.

Now we have all the pieces to prove Theorem 3.

Proof of Theorem 3.

Observation 4 shows that the described reduction can be computed in polynomial time and produces an instance of P|r_j, p_j=p|∑U_j with k machines. Lemmas 5 and 6 show that the described reduction is correct. Since Hitting Set is known to be NP-hard [21] and W[2]-hard when parameterized by k [10], the result follows.

4 New Analysis of Known Algorithm for P|r_j, p_j=p|∑w_jU_j

With Theorem 3, we have established that P|r_j, p_j=p|∑U_j is NP-hard. Hence, it is natural to resort to parameterized algorithms for efficiently finding exact solutions in restricted cases. To the best of our knowledge, the only known parameterized algorithm for P|r_j, p_j=p|∑w_jU_j is an XP-algorithm for the number m of machines as a parameter by Baptiste et al. [2, 3].

Theorem 7 ([2, 3]).

P|r_j, p_j=p|∑w_jU_j can be solved in n^O(m) time, where n is the number of jobs and m is the number of machines.

Theorem 7 implies that P|r_j, p_j=p|∑w_jU_j is in XP when parameterized by the number m of machines. Since Theorem 3 also shows W[2]-hardness for P|r_j, p_j=p|∑U_j parameterized by the number m of machines, the algorithm behind Theorem 7 presumably cannot be improved to an FPT-algorithm.

However, as it turns out, we can upper-bound the running time of the algorithm behind Theorem 7 in different ways to obtain additional tractability results. In the remainder of this section, we show that the algorithm developed by Baptiste et al. [2, 3], in addition to Theorem 7, also implies the following.

Theorem 8.

P|r_j, p_j=p|∑w_jU_j can be solved in p^O(m)·n^O(1) time and in m^O(p)·n^O(1) time, where n is the number of jobs, m is the number of machines, and p is the processing time.

Theorem 8 implies that P|r_j, p_j=p|∑w_jU_j is in XP when parameterized by the processing time p, and that it is in FPT when parameterized by the combination of the number m of machines and the processing time p. In order to prove Theorem 8, we present the dynamic programming algorithm for P|r_j, p_j=p|∑w_jU_j by Baptiste et al. [2, 3]. For the correctness of this algorithm, we refer to their work. We give an alternative running time analysis that shows the claimed running time bounds.

To this end, we need to introduce some additional notation and terminology. Recall that 𝒯 denotes the set of relevant starting time points. The algorithm makes use of Lemma 2, that is, we can assume that the starting times of all jobs in an optimal schedule are from 𝒯. A resource profile is a vector x = (x_1, x_2, …, x_m) with x_1 ≤ x_2 ≤ ⋯ ≤ x_m, x_m − x_1 ≤ p, and x_i ∈ 𝒯 for all i ∈ {1, …, m}. Let 𝒳 denote the set of all resource profiles. Now we define the following dynamic program. We assume that the jobs are sorted according to their due dates, that is, d_1 ≤ d_2 ≤ ⋯ ≤ d_n.

For two resource profiles a, b ∈ 𝒳 and some k ∈ {1, …, n}, we define W(k, a, b) to be the maximum weighted number of early jobs of any feasible schedule for the jobs 1, …, k such that

  • sorting the starting times of the first jobs on each machine from smallest to largest yields a vector a′ with a ≤ a′, and

  • sorting the completion times of the last jobs on each machine from smallest to largest yields a vector b′ with b′ ≤ b,

where for two vectors a, b of length m, we say that a ≤ b if and only if for all i ∈ {1, …, m} we have that a_i ≤ b_i.

From this definition, it follows that W(n, (0, …, 0), (t_max, …, t_max)), where t_max is the largest element in 𝒯, is the maximum weighted number of early jobs of any feasible schedule. Baptiste et al. [2, 3] proved the following.

Lemma 9 ([2, 3]).

For all k ∈ {1, …, n} and all resource profiles a, b ∈ 𝒳 with a ≤ b, it holds that W(k, a, b) equals W(k−1, a, b) if r_k ∉ [a_1, b_m − p), and otherwise

max( W(k−1, a, b), max over all x ∈ 𝒳 with r_k ≤ x_1, x_1 + p ≤ d_k, a ≤ x, and x′ = (x_2, x_3, …, x_m, x_1 + p) ≤ b of ( W(k−1, a, x) + W(k−1, x′, b) + w_k ) ),

where we define W(0, a, b) = 0 for all a, b ∈ 𝒳 with a ≤ b.

A straightforward running time analysis yields the following. We have that |𝒯| ∈ O(n²) and hence |𝒳| ∈ O(n^2m). It follows that the size of the dynamic programming table W is in O(n^(4m+1)), and the time to compute one entry is in O(n^2m). This together with Lemma 9 yields Theorem 7. In the remainder of the section, we give an alternative running time analysis to prove Theorem 8.

Proof of Theorem 8.

To prove Theorem 8, we give a different bound for the size of 𝒳. Recall that for all resource profiles x ∈ 𝒳 we have that x_m − x_1 ≤ p. It follows that there are |𝒯| possibilities for the value of x_1, and then for 2 ≤ i ≤ m we have that x_1 ≤ x_i ≤ x_1 + p. Hence, we get that |𝒳| ∈ O(n²·(p+1)^(m−1)). This together with Lemma 9 immediately gives us that P|r_j, p_j=p|∑w_jU_j can be solved in p^O(m)·n^O(1) time.

Furthermore, we have x_1 ≤ x_2 ≤ ⋯ ≤ x_m. Hence, the resource profile x can be characterized by counting how many times each value t ∈ 𝒯 appears in x. Again, we can exploit that x_m − x_1 ≤ p. There are |𝒯| possibilities for the value of x_1, and given x_1, each other entry x_i of x with 2 ≤ i ≤ m can be characterized by the amount 0 ≤ y_i = x_i − x_1 ≤ p by which it is larger than x_1. Clearly, there are p+1 different possible values for the amount y_i. It follows that, given x_1, we can characterize x by counting how often each value 0 ≤ t ≤ p appears as an amount. Hence, we get that |𝒳| ∈ O(n²·(m+1)^(p+1)). This together with Lemma 9 immediately gives us that P|r_j, p_j=p|∑w_jU_j can be solved in m^O(p)·n^O(1) time.
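Both counting arguments can be checked numerically on toy parameters. The enumeration below is our own illustration; an arbitrary small ground set stands in for 𝒯, and the asserted inequalities are the |𝒯|·(p+1)^(m−1) and |𝒯|·(m+1)^(p+1) bounds from the proof:

```python
from itertools import combinations_with_replacement

def resource_profiles(T, m, p):
    """All resource profiles: sorted m-vectors over T with x_m - x_1 <= p."""
    return [x for x in combinations_with_replacement(sorted(T), m)
            if x[-1] - x[0] <= p]

# toy parameters (in the paper, T would be the relevant starting time points)
T, m, p = [0, 1, 2, 3, 4, 5], 3, 2
X = resource_profiles(T, m, p)
# both profile-counting bounds from the proof of Theorem 8 hold:
assert len(X) <= len(T) * (p + 1) ** (m - 1)   # behind the p^O(m) bound
assert len(X) <= len(T) * (m + 1) ** (p + 1)   # behind the m^O(p) bound
```

For these parameters the exact count is 28 profiles, well below both bounds (54 and 384, respectively).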

Lastly, we remark that with similar alternative running time analyses for the dynamic programming algorithm by Baptiste et al. [2, 3], one can show that P|r_j, p_j=p|∑w_jU_j is in XP when parameterized by the number of release dates or due dates, and that it is in FPT when parameterized by the combination of the number of machines and the number of release dates or due dates. However, as we show in the next section, we can obtain fixed-parameter tractability by parameterizing only by the number of release dates or only by the number of due dates.

5 FPT-Algorithm for P|r_j, p_j=p|∑w_jU_j

In this section, we present a new FPT-algorithm for P|r_j, p_j=p|∑w_jU_j parameterized by the number of release dates or due dates. Formally, we show the following.

Theorem 10.

P|r_j, p_j=p|∑w_jU_j can be solved in 2^O(2^r#·r#)·n^O(1) time and in 2^O(2^d#·d#)·n^O(1) time.

Theorem 10 implies that P|r_j, p_j=p|∑w_jU_j is in FPT when parameterized by the number r# of different release dates and when parameterized by the number d# of different due dates. We prove the first running time upper bound of Theorem 10, where we parameterize by the number r# of release dates. By Observation 1, the case of the number d# of due dates is symmetric. We present a reduction from P|r_j, p_j=p|∑w_jU_j to Mixed Integer Linear Program (MILP).

Mixed Integer Linear Program (MILP)

Input:

A vector x of n variables, a subset S of the variables which are considered integer variables, a constraint matrix A ∈ ℚ^(m×n), and two vectors b ∈ ℚ^m and c ∈ ℚ^n.

Task:

Compute an assignment to the variables (if one exists) such that all integer variables in S are set to integer values, Ax ≤ b, x ≥ 0, and c⊤x is maximized.

We give a reduction that produces an MILP instance with a small number of integer variables. More precisely, the number of integer variables will be upper-bounded by a function of the number of release dates of the P|r_j, p_j=p|∑w_jU_j instance. This allows us to upper-bound the running time necessary to solve the MILP instance using the following well-known result.

Theorem 11 ([8, 29]).

MILP can be solved in 2^O(n_int·log n_int)·|I|^O(1) time, where n_int is the number of integer variables and |I| is the instance size.

Furthermore, we construct the MILP instance in a way that ensures that there always exist optimal solutions where all variables are set to integer values. Informally, we ensure that the constraint matrix for the rational variables is totally unimodular (a matrix is totally unimodular if each of its square submatrices has determinant 0, 1, or −1 [9]). This allows us to use the following result.

Lemma 12 ([6]).

Let A_frac ∈ ℚ^(m×n₂) be totally unimodular. Then for any A_int ∈ ℤ^(m×n₁), b ∈ ℤ^m, and c ∈ ℚ^(n₁+n₂), the MILP

max c⊤x subject to (A_int A_frac)·x ≤ b, x ≥ 0,

where x = (x_int, x_frac), with the first n₁ variables (i.e., x_int) being the integer variables, has an optimal solution where all variables are integer.
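Total unimodularity as defined above can be verified by brute force on small matrices, which is handy for validating constraint-matrix constructions of the kind Lemma 12 requires. The checker below (our own illustration; exponential in the matrix size) tests every square submatrix with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:                     # row swap flips the sign
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        inv = 1 / M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] * inv
            for cc in range(c, n):
                M[r][cc] -= f * M[c][cc]
    return d

def is_totally_unimodular(A):
    """Brute-force TU test: every square submatrix has det in {-1, 0, 1}."""
    m, n = len(A), len(A[0])
    return all(abs(det([[A[r][c] for c in cols] for r in rows])) <= 1
               for k in range(1, min(m, n) + 1)
               for rows in combinations(range(m), k)
               for cols in combinations(range(n), k))
```

For example, a 0/1 matrix with consecutive ones in each row (an interval matrix) passes the test, while the matrix ((1, 1), (−1, 1)) fails, since its determinant is 2.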

Before we describe how to construct an MILP instance for a given instance of P|r_j, p_j=p|∑w_jU_j, we make an important observation about optimal schedules. Intuitively, we show that we can assume that each job is scheduled as early as possible and that idle times only occur directly before release dates.

Lemma 13.

Let σ be a feasible schedule for an instance of Prj,pj=pwjUj such that the weighted number of early jobs is W. Then there exists a feasible schedule σ′ such that

  • the weighted number of early jobs is W,

  • for each job j with σ′(j) = (i, t) for some t > r_j, machine i is not idle at time t − 1, and

  • all starting times of σ′ are in the set 𝒯 of relevant starting time points.

Proof.

Assume that there is a job j such that σ(j) = (i, t) for some t > r_j and machine i is idle at time t − 1. Assume that job j is the earliest such job, that is, the job with minimum t. Since machine i is idle at time t − 1 and r_j < t, we can create a new schedule σ′ that is the same as σ except that σ′(j) = (i, t − 1). Clearly, σ′ is feasible and has the same weighted number of early jobs as σ. By repeating this process, we obtain a feasible schedule σ′′ with the same set of early jobs such that for each job j with σ′′(j) = (i, t) where machine i is idle at time t − 1, it holds that t = r_j. Furthermore, each starting time in σ′′ is a release date r or a time t with t = r + ℓ·p for some release date r and some integer ℓ, since the first job of every maximal busy interval starts at a release date and every subsequent job in that interval starts exactly p time units after its predecessor. Hence, all starting times of σ′′ are in the set 𝒯 of relevant starting time points.

We call a feasible schedule σ release date aligned if the second condition of Lemma 13 holds, i.e., for each job j with σ(j) = (i, t) for some t > r_j, machine i is not idle at time t − 1. Note that Lemma 13 is stronger than Lemma 2 and implies that there always exists an optimal feasible schedule that is release date aligned.
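The repeated left-shifting in the proof of Lemma 13 can be simulated directly. The following Python sketch uses an illustrative representation (a schedule maps each job to a (machine, start) pair) that is our own choice, not notation from the paper:

```python
def left_shift(schedule, release, p):
    """Repeatedly move a job one time unit earlier while it starts after its
    release date and its machine is idle in the unit just before its start."""
    def busy(machine, time, exclude):
        # Is some other job processed on `machine` during time unit `time`?
        return any(i == machine and s <= time < s + p
                   for j, (i, s) in schedule.items() if j != exclude)
    changed = True
    while changed:
        changed = False
        for j, (i, t) in sorted(schedule.items(), key=lambda kv: kv[1][1]):
            if t > release[j] and not busy(i, t - 1, j):
                schedule[j] = (i, t - 1)
                changed = True
    return schedule

# One machine, p = 2, jobs released at 0 and 1: both jobs slide left until
# every idle unit sits directly before a release date (here: no idle time).
print(left_shift({0: (1, 2), 1: (1, 5)}, {0: 0, 1: 1}, p=2))  # {0: (1, 0), 1: (1, 2)}
```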

Given a feasible schedule σ that is release date aligned, we say that a release date r_j is active on machine i if job j is scheduled to start at this release date, that is, σ(j) = (i, r_j), and machine i is idle at time r_j − 1. Let T be a subset of all release dates; then we say that machine i has type T in σ if T is the set of active release dates on machine i.

Recall that 𝒯 = { t ∣ there exist a release date r_j and an integer ℓ with 0 ≤ ℓ ≤ n such that t = r_j + ℓ·p } denotes the set of relevant starting time points. We say that a starting time t ∈ 𝒯 is available on a machine with type T if t = r + ℓ·p for some r ∈ T and some integer ℓ ≥ 0, and t + p ≤ r′, where r′ is the smallest release date in T that is larger than r (if no such release date exists, the latter condition is vacuous).
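Both the set 𝒯 and the availability predicate can be computed directly from these definitions. The following Python sketch illustrates this; the function names and the representation of a type as a set of release dates are our own illustrative choices.

```python
def relevant_starting_times(release_dates, p, n):
    """T = { r + l*p | r a release date, 0 <= l <= n }, sorted."""
    return sorted({r + l * p for r in release_dates for l in range(n + 1)})

def is_available(t, machine_type, p):
    """t is available on a machine of type T if t = r + l*p for some release
    date r in T and the job would finish by the next release date r' in T
    (vacuously true if no later release date is in T)."""
    for r in machine_type:
        if t >= r and (t - r) % p == 0:
            later = [r2 for r2 in machine_type if r2 > r]
            if not later or t + p <= min(later):
                return True
    return False

# Release dates {0, 3}, p = 2, n = 2 jobs:
print(relevant_starting_times([0, 3], p=2, n=2))  # [0, 2, 3, 4, 5, 7]
print(is_available(2, {0, 3}, p=2))  # False: a job starting at 2 ends at 4 > 3
```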

Given an instance I of Prj,pj=pwjUj, we create an instance I′ of MILP as follows. For each type T we create an integer variable x_T that quantifies how many machines have type T and create the constraint

∑_T x_T ≤ m. (1)

For each starting time t ∈ 𝒯 we create a fractional variable x_t that quantifies on how many machines the starting time t is available and create the following set of constraints.

∀ t ∈ 𝒯:  x_t = ∑_T α_{t,T} · x_T, (2)

where α_{t,T} = 1 if starting time t is available on a machine with type T and α_{t,T} = 0 otherwise.

For each combination of a job j and a starting time t, we create a fractional variable x_{j,t} if job j can be scheduled to start at time t without violating j’s release date or due date. This variable indicates whether job j is scheduled to start at starting time t and is early. We create the following constraints.

∀ t ∈ 𝒯:  ∑_j x_{j,t} ≤ x_t. (3)
∀ j ∈ {1, …, n}:  ∑_t x_{j,t} ≤ 1. (4)

Finally, we use the following function as the maximization objective.

∑_{j,t} w_j · x_{j,t} (5)

This finishes the construction of the MILP instance I′. We can observe the following.
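For concreteness, the construction can be written down as plain coefficient maps, independent of any particular MILP solver. In the following Python sketch, the availability indicator from Constraints (2) is assumed to be precomputed and passed in as a dictionary `alpha`, and all names are our own illustrative choices.

```python
def build_constraints(types, T_times, alpha, p, m, jobs):
    """Emit Constraints (1)-(4) and Objective (5) as coefficient maps.
    Each constraint is a triple (coefficients, sense, right-hand side);
    jobs is a list of (release date, due date, weight) triples."""
    def can_be_early(j, t):
        r, d, _ = jobs[j]
        return r <= t and t + p <= d  # variable x_{j,t} exists only then
    cons = []
    # (1): sum over types T of x_T is at most m
    cons.append(({('x_T', T): 1 for T in types}, '<=', m))
    # (2): x_t minus the sum of alpha_{t,T} * x_T equals 0, for every t
    for t in T_times:
        c = {('x_t', t): 1}
        for T in types:
            if alpha.get((t, T), 0):
                c[('x_T', T)] = -1
        cons.append((c, '=', 0))
    # (3): sum over jobs j of x_{j,t} is at most x_t, for every t
    for t in T_times:
        c = {('x_t', t): -1}
        c.update({('x_jt', j, t): 1 for j in range(len(jobs)) if can_be_early(j, t)})
        cons.append((c, '<=', 0))
    # (4): each job is scheduled early at most once
    for j in range(len(jobs)):
        cons.append(({('x_jt', j, t): 1 for t in T_times if can_be_early(j, t)}, '<=', 1))
    # (5): maximize the total weight of early jobs
    obj = {('x_jt', j, t): jobs[j][2]
           for j in range(len(jobs)) for t in T_times if can_be_early(j, t)}
    return cons, obj

# Toy instance: one machine, p = 2, a single job with r = 0, d = 2, w = 5.
cons, obj = build_constraints(
    types=[frozenset(), frozenset({0})], T_times=[0, 2],
    alpha={(0, frozenset({0})): 1, (2, frozenset({0})): 1},
    p=2, m=1, jobs=[(0, 2, 5)])
print(len(cons), obj)  # 6 {('x_jt', 0, 0): 5}
```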

Observation 14.

Given an instance I of Prj,pj=pwjUj, the above-described MILP instance I′ can be computed in O(2^{r#} · (m + n)^2) time and has 2^{r#} integer variables.

In the following, we prove the correctness of the reduction to MILP. We start by showing that if the Prj,pj=pwjUj instance admits a feasible schedule where the weighted number of early jobs is W, then the constructed MILP instance admits a feasible solution with objective value W.

Lemma 15.

If the Prj,pj=pwjUj instance I admits a feasible schedule where the weighted number of early jobs is W, then the MILP instance I′ admits a feasible solution with objective value W.

Proof.

Assume that we are given a feasible schedule σ for I such that the weighted number of early jobs is W. By Lemma 13 we can assume that σ is release date aligned.

We construct a solution for I′ as follows. Consider a job j with σ(j) = (i, t) such that t + p ≤ d_j, that is, job j is early. We know that t ∈ 𝒯. We set x_{j,t} = 1 and, for all t′ ∈ 𝒯 with t′ ≠ t, we set x_{j,t′} = 0. For every tardy job j, we set all variables x_{j,t} to 0. Note that this guarantees that Constraints (4) are fulfilled. Furthermore, assuming we can set the remaining variables to values such that the remaining constraints are fulfilled, the objective value of the solution to I′ is W.

In the remainder, we show how to set the remaining variables such that all constraints are fulfilled. Initially, we set all variables x_T to zero. Next, we determine the type of each machine. Consider machine i. Then T := { r_j ∣ σ(j) = (i, r_j) for some job j and machine i is idle at time r_j − 1 } is the type of machine i, and we increase x_T by one. We do this for every machine. Clearly, afterwards Constraint (1) is fulfilled.

Next, we set x_t := ∑_T α_{t,T} · x_T, where α_{t,T} = 1 if starting time t is available on a machine with type T and α_{t,T} = 0 otherwise. Clearly, this fulfills Constraints (2). It remains to show that Constraints (3) are fulfilled. So consider some time t ∈ 𝒯. For each job j starting at time t on some machine i, we increased x_T by one for some T with α_{t,T} = 1 when processing machine i. Thus, we have ∑_j x_{j,t} ≤ ∑_T α_{t,T} · x_T = x_t.

Before we continue with the other direction of the correctness, we prove that we can apply Lemma 12 to show that the MILP instance I′ admits an optimal solution where all variables are set to integer values.

Lemma 16.

The MILP instance I′ admits an optimal solution where all variables are set to integer values. Such a solution can be computed in 2^{O(2^{r#} · r#)} · n^{O(1)} time.

Proof.

Notice that since the Constraints (2) are equality constraints, in any feasible solution to I′, all variables x_t are set to integer values. Hence, I′ is equivalent to the MILP I′′ arising from I′ by declaring x_t to be an integer variable for every t ∈ 𝒯 (in addition to the variables x_T), and it suffices to show that I′′ has an optimal integer solution.

We show that the constraint matrix restricted to the fractional variables of I′′ (i.e., the variables x_{j,t}) is totally unimodular. By Lemma 12, this implies that I′′ and therefore also I′ admits an optimal solution where all variables are set to integer values.

Note that the Constraints (3) partition the set of fractional variables x_{j,t}, that is, each fractional variable appears in exactly one of the Constraints (3). The same holds for the Constraints (4). Furthermore, the coefficients in the constraint matrix for each variable are either 1 (if the variable is part of a constraint) or 0. Hence, the constraint matrix is a 0-1 matrix with exactly two 1s in every column. Moreover, in each column, one of the two 1s appears in a row corresponding to the Constraints (3) and the other in a row corresponding to the Constraints (4). This is a sufficient condition for the constraint matrix to be totally unimodular [9].

Finally, we argue that an optimal integer solution for I′′ can be computed in the claimed running time upper-bound. We use the algorithm implicitly described by Chaudhary et al. [6] in their proof of Lemma 12. First, we compute an optimal solution for I′. By the arguments at the beginning of this proof, this is also an optimal solution for I′′. To transform this optimal solution into one where every variable is set to an integer value, we fix all integer variables, resulting in an LP (a Linear Program is the special case of MILP where no variables are considered integer variables, that is, the set S in the definition of MILP is empty) instance I′′′ whose variables are precisely the fractional variables of I′′. Since the constraint matrix of the LP I′′′ is totally unimodular (as argued above), it is well-known that an optimal solution for I′′′ where all variables are set to integer values can be computed in polynomial time, see e.g. Korte and Vygen [24, Theorem 4.18]. Combining this integral optimal solution for I′′′ with the integer variables from I′′ then yields an optimal integral solution for I′′. Now, with Theorem 11 and Observation 14, we get the claimed overall running time upper-bound.

Now we proceed with showing that if the constructed MILP instance admits an optimal solution that has objective value W, then the original Prj,pj=pwjUj instance admits a feasible schedule where the weighted number of early jobs is W.

Lemma 17.

If the MILP instance I′ admits an optimal solution where all variables are set to integer values and that has objective value W, then the Prj,pj=pwjUj instance I admits a feasible schedule σ where the weighted number of early jobs is W. The schedule σ can be computed from the optimal solution to I′ in polynomial time (in the size of I′).

Proof.

Assume we are given an optimal solution for I′ where all variables are set to integer values and that has objective value W. We construct a feasible schedule σ as follows.

First, we assign a type to every machine. Iterate through the types T and, initially, set i = 1. If x_T > 0, then assign type T to machines i to i + x_T − 1. Afterwards, increase i by x_T. Since the solution to I′ fulfills Constraint (1), we know that this procedure does not assign types to more than m machines.

Now iterate through the relevant starting times t ∈ 𝒯. Let J_t = { j ∣ x_{j,t} = 1 } and let M_t = { i ∣ starting time t is available on machine i }. By Constraints (2) and (3), we know that |J_t| ≤ |M_t| = x_t. Hence, we can create a feasible schedule by setting σ(j) = (i, t) for every job j ∈ J_t, where i ∈ M_t and for all j, j′ ∈ J_t with j ≠ j′ we have σ(j) = (i, t) and σ(j′) = (i′, t) with i ≠ i′. In other words, we schedule each job in J_t on a distinct machine i ∈ M_t with starting time t. The Constraints (4) ensure that we schedule each job at most once. For every job j that is not contained in any set J_t with t ∈ 𝒯, we schedule j on an arbitrary machine i at a starting time that is later than r_j and later than the completion time of the last job scheduled on i. Clearly, we can compute σ in time polynomial in |I′|. By the definition of available starting times and the fact that we only create the variable x_{j,t} if t ≥ r_j, this schedule is feasible.
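The first step of this extraction, distributing types over the machines according to the integer values x_T, can be sketched as follows (representing a type as a frozenset of release dates is our own illustrative choice; since Constraint (1) is an inequality, leftover machines receive the empty type):

```python
def assign_types(type_counts, m):
    """Assign a type to machines 1..m; type_counts maps each type T to x_T."""
    assignment, i = {}, 1
    for T, count in type_counts.items():
        for _ in range(count):  # type T goes to machines i, ..., i + x_T - 1
            assignment[i] = T
            i += 1
    while i <= m:               # machines without a type: no active release dates
        assignment[i] = frozenset()
        i += 1
    return assignment

counts = {frozenset({0}): 2, frozenset({0, 3}): 1}
print(assign_types(counts, 4))
```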

It remains to show that the weighted number of early jobs is W. To this end, note that we only create the variable x_{j,t} if r_j ≤ t and t + p ≤ d_j. Hence, for each job j with x_{j,t} = 1 for some t ∈ 𝒯, we know that this job is early in the constructed schedule σ. It follows that the weighted number of early jobs is ∑_{j,t} w_j · x_{j,t}, which equals the maximization objective of I′ and hence equals W.

Now we have all the pieces to prove Theorem 10.

Proof of Theorem 10.

Given an instance I of Prj,pj=pwjUj, we create an MILP instance I′ as described above and use Lemma 16 to compute an optimal solution for I′ where all variables are set to integer values. Lemmas 15 and 17 show that we can correctly compute an optimal schedule for I from the solution to I′ in polynomial time (in the size of I′). Observation 14 together with Lemma 16 shows that this algorithm has the claimed running time upper-bound.

6 Conclusion and Future Work

In this work, we resolved open questions by Kravchenko and Werner [25], Sgall [36], and Mnich and van Bevern [32] by showing that Prj,pj=pUj is NP-hard and W[2]-hard when parameterized by the number of machines. The established hardness of the problem motivates investigating it from the viewpoint of exact parameterized or approximation algorithms. In this work, we focused on the former, leaving the latter for future research. We provided a first step in systematically exploring the parameterized complexity of Prj,pj=pwjUj. Our parameterized hardness result shows that the known XP-algorithm for the number of machines as a parameter is optimal from a classification standpoint. Furthermore, we showed that this known algorithm implies that the problem is also contained in XP when parameterized by the processing time, and that it is contained in FPT when parameterized by the combination of the number of machines and the processing time. Finally, we gave an FPT-algorithm for Prj,pj=pwjUj parameterized by the number of release dates (or due dates). We leave several questions open; the most interesting one is the following.

  • Is Prj,pj=pwjUj in FPT or W[1]-hard when parameterized by the common processing time p of the jobs?

Other interesting parameters to consider might be the number of early jobs or the number of tardy jobs. It is easy to see that Prj,pj=pwjUj is in XP when parameterized by either one of those parameters, by some simple guess-and-check algorithm (recall that we can check in polynomial time whether all jobs can be scheduled early [5, 37, 38]). Hence, it remains open whether the problem is in FPT or W[1]-hard with respect to those parameters.

References

  • [1] Esther M. Arkin and Ellen B. Silverberg. Scheduling jobs with fixed start and end times. Discrete Applied Mathematics, 18(1):1–8, 1987. doi:10.1016/0166-218X(87)90037-0.
  • [2] Philippe Baptiste. Scheduling equal-length jobs on identical parallel machines. Discrete Applied Mathematics, 103(1-3):21–32, 2000. doi:10.1016/S0166-218X(99)00238-3.
  • [3] Philippe Baptiste, Peter Brucker, Sigrid Knust, and Vadim G. Timkovsky. Ten notes on equal-processing-time scheduling: at the frontiers of solvability in polynomial time. Quarterly Journal of the Belgian, French and Italian Operations Research Societies, 2(2):111–127, 2004.
  • [4] Karl Bringmann, Nick Fischer, Danny Hermelin, Dvir Shabtay, and Philip Wellnitz. Faster minimization of tardy processing time on a single machine. Algorithmica, 84(5):1341–1356, 2022. doi:10.1007/S00453-022-00928-W.
  • [5] Peter Brucker and Svetlana A. Kravchenko. Scheduling jobs with equal processing times and time windows on identical parallel machines. Journal of Scheduling, 11(4):229–237, 2008. doi:10.1007/S10951-008-0063-Y.
  • [6] Juhi Chaudhary, Hendrik Molter, and Meirav Zehavi. Parameterized analysis of bribery in challenge the champ tournaments. In Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI), pages 2704–2712. ijcai.org, 2024. URL: https://www.ijcai.org/proceedings/2024/299.
  • [7] Marek Cygan, Fedor V. Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Parameterized Algorithms. Springer, 2015. doi:10.1007/978-3-319-21275-3.
  • [8] Daniel Dadush, Chris Peikert, and Santosh Vempala. Enumerative lattice algorithms in any norm via M-ellipsoid coverings. In Proceedings of the 52nd IEEE Annual Symposium on Foundations of Computer Science (FOCS), pages 580–589. IEEE, 2011. doi:10.1109/FOCS.2011.31.
  • [9] George Bernard Dantzig. Linear inequalities and related systems. Number 38. Princeton University Press, 1956.
  • [10] Rodney G. Downey and Michael R. Fellows. Parameterized Complexity. Springer, 1999. doi:10.1007/978-1-4612-0515-9.
  • [11] Rodney G. Downey and Michael R. Fellows. Fundamentals of Parameterized Complexity. Springer, 2013. doi:10.1007/978-1-4471-5559-1.
  • [12] Nick Fischer and Leo Wennmann. Minimizing tardy processing time on a single machine in near-linear time. In Proceedings of the 51st International Colloquium on Automata, Languages, and Programming (ICALP), volume 297 of LIPIcs, pages 64:1–64:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.ICALP.2024.64.
  • [13] Jörg Flum and Martin Grohe. Parameterized Complexity Theory, volume XIV of Texts in Theoretical Computer Science. An EATCS Series. Springer, 2006. doi:10.1007/3-540-29953-X.
  • [14] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
  • [15] Ronald L. Graham, Eugene L. Lawler, Jan K. Lenstra, and A. H. G. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling: a survey. In Annals of Discrete Mathematics, volume 5, pages 287–326. Elsevier, 1979.
  • [16] Klaus Heeger and Danny Hermelin. Minimizing the weighted number of tardy jobs is W[1]-hard. In Proceedings of the 32nd Annual European Symposium on Algorithms (ESA), volume 308 of LIPIcs, pages 68:1–68:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.ESA.2024.68.
  • [17] Klaus Heeger, Danny Hermelin, George B Mertzios, Hendrik Molter, Rolf Niedermeier, and Dvir Shabtay. Equitable scheduling on a single machine. Journal of Scheduling, 26(2):209–225, 2023. doi:10.1007/S10951-022-00754-6.
  • [18] Danny Hermelin, Yuval Itzhaki, Hendrik Molter, and Dvir Shabtay. On the parameterized complexity of interval scheduling with eligible machine sets. Journal of Computer and System Sciences, page 103533, 2024. doi:10.1016/J.JCSS.2024.103533.
  • [19] Danny Hermelin, Shlomo Karhi, Michael L. Pinedo, and Dvir Shabtay. New algorithms for minimizing the weighted number of tardy jobs on a single machine. Annals of Operations Research, 298(1):271–287, 2021. doi:10.1007/S10479-018-2852-9.
  • [20] Danny Hermelin, Hendrik Molter, and Dvir Shabtay. Minimizing the weighted number of tardy jobs via (max,+)-convolutions. INFORMS Journal on Computing, 2023.
  • [21] Richard M. Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations, pages 85–103. Springer, 1972. doi:10.1007/978-1-4684-2001-2_9.
  • [22] Matthias Kaul, Matthias Mnich, and Hendrik Molter. Single-machine scheduling to minimize the number of tardy jobs with release dates. In Proceedings of the 19th International Symposium on Parameterized and Exact Computation (IPEC), volume 321 of LIPIcs, pages 19:1–19:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.IPEC.2024.19.
  • [23] Kim-Manuel Klein, Adam Polak, and Lars Rohwedder. On minimizing tardy processing time, max-min skewed convolution, and triangular structured ILPs. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2947–2960. SIAM, 2023. doi:10.1137/1.9781611977554.CH112.
  • [24] Bernhard Korte and Jens Vygen. Combinatorial Optimization. Springer, 6th edition, 2018.
  • [25] Svetlana A. Kravchenko and Frank Werner. Parallel machine problems with equal processing times: a survey. Journal of Scheduling, 14:435–444, 2011. doi:10.1007/S10951-011-0231-3.
  • [26] Sven O. Krumke, Clemens Thielen, and Stephan Westphal. Interval scheduling on related machines. Computers & Operations Research, 38(12):1836–1844, 2011. doi:10.1016/J.COR.2011.03.001.
  • [27] Harold W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83–97, 1955.
  • [28] Eugene L. Lawler and James M. Moore. A functional equation and its application to resource allocation and sequencing problems. Management Science, 16(1):77–84, 1969.
  • [29] Hendrik W. Lenstra. Integer programming with a fixed number of variables. Mathematics of Operations Research, 8:538–548, 1983. doi:10.1287/MOOR.8.4.538.
  • [30] Jan K. Lenstra, A.H.G. Rinnooy Kan, and Peter Brucker. Complexity of machine scheduling problems. Annals of Discrete Mathematics, 1:343–362, 1977.
  • [31] LL Liu, Chi To Ng, and TC Edwin Cheng. Bicriterion scheduling with equal processing times on a batch processing machine. Computers & Operations Research, 36(1):110–118, 2009. doi:10.1016/J.COR.2007.07.007.
  • [32] Matthias Mnich and René van Bevern. Parameterized complexity of machine scheduling: 15 open problems. Computers & Operations Research, 100:254–261, 2018. doi:10.1016/J.COR.2018.07.020.
  • [33] James M. Moore. An n job, one machine sequencing algorithm for minimizing the number of late jobs. Management Science, 15:102–109, 1968.
  • [34] Michael L. Pinedo. Scheduling: Theory, Algorithms, and Systems, 5th Edition. Springer, 2016.
  • [35] Baruch Schieber and Pranav Sitaraman. Quick minimization of tardy processing time on a single machine. In Proceedings of the 18th International Symposium on Algorithms and Data Structures Symposium (WADS), volume 14079 of Lecture Notes in Computer Science, pages 637–643. Springer, 2023. doi:10.1007/978-3-031-38906-1_42.
  • [36] Jiří Sgall. Open problems in throughput scheduling. In Proceedings of the 20th Annual European Symposium on Algorithms (ESA), volume 7501 of Lecture Notes in Computer Science, pages 2–11. Springer, 2012. doi:10.1007/978-3-642-33090-2_2.
  • [37] Barbara B. Simons. Multiprocessor scheduling of unit-time jobs with arbitrary release times and deadlines. SIAM Journal on Computing, 12(2):294–299, 1983. doi:10.1137/0212018.
  • [38] Barbara B. Simons and Manfred K. Warmuth. A fast algorithm for multiprocessor scheduling of unit-length jobs. SIAM Journal on Computing, 18(4):690–710, 1989. doi:10.1137/0218048.
  • [39] Shao Chin Sung and Milan Vlach. Maximizing weighted number of just-in-time jobs on unrelated parallel machines. Journal of Scheduling, 8(5):453–460, 2005. doi:10.1007/S10951-005-2863-7.