Single machine scheduling with job-dependent machine deterioration

We consider the single machine scheduling problem with job-dependent machine deterioration. In the problem, we are given a single machine with an initial non-negative maintenance level, and a set of jobs each with a non-preemptive processing time and a machine deterioration. Such a machine deterioration quantifies the decrement in the machine maintenance level after processing the job. To avoid machine breakdown, one should guarantee a non-negative maintenance level at any time point; and whenever necessary, a maintenance activity must be allocated for restoring the machine maintenance level. The goal of the problem is to schedule the jobs and the maintenance activities such that the total completion time of jobs is minimized. There are two variants of maintenance activities: in the partial maintenance case each activity can be allocated to increase the machine maintenance level to any level not exceeding the maximum; in the full maintenance case every activity must be allocated to increase the machine maintenance level to the maximum. In a recent work, the problem in the full maintenance case has been proven NP-hard; several special cases of the problem in the partial maintenance case were shown solvable in polynomial time, but the complexity of the general problem is left open. In this paper we first prove that the problem in the partial maintenance case is NP-hard, thus settling the open problem; we then design a $2$-approximation algorithm.


Introduction
In many scheduling problems, processing a job on a machine causes the machine to deteriorate to some extent, and consequently maintenance activities need to be executed in order to restore the machine capacity. Scheduling problems with maintenance activities have been extensively investigated since the work of Lee and Liman [7]. A maintenance activity is normally described by two parameters, the starting time and the duration. If these two parameters are given beforehand, a maintenance activity is referred to as fixed; otherwise it is called flexible. Various scheduling models with fixed maintenance activities, on different machine environments and job characteristics, have been comprehensively surveyed by Schmidt [13], Lee [5], and Ma et al. [9].
A number of researchers initiated the work on flexible maintenance activities. Qi et al. [12] considered a single machine scheduling problem to simultaneously schedule jobs and maintenance activities, with the objective to minimize the total completion time of jobs. They showed that the problem is NP-hard in the strong sense and proposed heuristics and a branch-and-bound exact algorithm. (Qi [11] later analyzed the worst-case performance ratio for one of the heuristics, the shortest processing time first, or SPT.) Lee and Chen [6] studied the multiple parallel machines scheduling problem where each machine must be maintained exactly once, with the objective to minimize the total weighted completion time of jobs. They proved the NP-hardness for some special cases and proposed a branch-and-bound exact algorithm based on column generation; the NP-hardness for the general problem is implied. Kubzin and Strusevich [4] considered two-machine open shop and two-machine flow shop scheduling problems in which each machine has to be maintained exactly once and the duration of each maintenance depends on its starting time. The objective is to minimize the maximum completion time of all jobs and all maintenance activities. Among others, the authors showed that the open shop problem is polynomial time solvable for quite general functions defining the duration of maintenance in its starting time; they also proved that the flow shop problem is binary NP-hard and presented a fully polynomial time approximation scheme (FPTAS) [4].
Returning to single machine scheduling, Chen [2] studied periodic maintenance activities of a constant duration not exceeding the available period, with the objective to minimize the maximum completion time of jobs (that is, the makespan). The author presented two mixed integer programs and heuristics, and conducted computational experiments to examine their performance. Mosheiov and Sarig [10] considered the problem where the machine needs to be maintained prior to a given deadline, with the objective to minimize the total weighted completion time of jobs. They showed the binary NP-hardness and presented a pseudo-polynomial time dynamic programming algorithm and an efficient heuristic. Luo et al. [8] investigated a variant similar to [10] in which the jobs are weighted and the duration of the maintenance is a nondecreasing function of its starting time (which must be prior to a given deadline). Their objective is to minimize the total weighted completion time of jobs; the authors showed the weak NP-hardness, and for the special case of a concave duration function they proposed a (1 + √2/2 + ε)-approximation algorithm. Yang and Yang [16] considered a position-dependent aging effect described by a power function, under maintenance activities and variable maintenance duration considerations simultaneously; they examined two models with the objective to minimize the makespan, and for each of them they presented a polynomial time algorithm.
Scheduling on two identical parallel machines with periodic maintenance activities was examined by Sun and Li [14], where the authors presented approximation algorithms with constant performance ratios for minimizing the makespan or minimizing the total completion time of jobs. Xu et al. [15] considered the case where the length of time between two consecutive maintenance activities is bounded; they presented approximation algorithms for the multiple parallel machines scheduling problem to minimize the completion time of the last maintenance, and for the single machine scheduling problem to minimize the makespan, respectively.

Problem definition
Considering machine deterioration in the real world, a new scheduling model subject to job-dependent machine deterioration was recently introduced by Bock et al. [1]. In this model, the single machine must have a non-negative maintenance level (ML) at any time point, specifying its current maintenance state. (A negative maintenance level indicates machine breakdown, which is prohibited.) We are given a set of jobs J = {J_i, i = 1, 2, ..., n}, where each job J_i = (p_i, δ_i) is specified by its non-preemptive processing time p_i and machine deterioration δ_i. The machine deterioration δ_i quantifies the decrement in the machine maintenance level after processing the job J_i. (That is, if the maintenance level is ML before processing the job J_i, then afterwards it reduces to ML − δ_i, suggesting that ML has to be at least δ_i in order for the machine to process the job J_i.) Clearly, to process all the jobs, maintenance activities (MAs) need to be allocated inside a schedule to restore the maintenance level, preventing machine breakdown. Given that the machine can have a maximum maintenance level of ML_max, and assuming a unit maintenance speed, an MA of duration D would increase the maintenance level by min{D, ML_max − ML}, where ML is the maintenance level before the MA.
With an initial machine maintenance level ML 0 , 0 ≤ ML 0 ≤ ML max , the goal of the problem is to schedule the jobs and necessary MAs such that all jobs can be processed without machine breakdown, and that the total completion time of jobs is minimized.
There are two variants of the problem depending on whether or not one has the freedom to choose the duration of an MA: in the partial maintenance case, the duration of each MA can be anywhere in between 0 and (ML_max − ML), where ML is the maintenance level before the MA; in the full maintenance case, however, the duration of every MA must be exactly (ML_max − ML), consequently increasing the maintenance level to the maximum value ML_max. Let C_i denote the completion time of the job J_i, for i = 1, 2, ..., n. In the three-field notation, the two problems discussed in this paper are denoted as (1|pMA| Σ_i C_i) and (1|fMA| Σ_i C_i), respectively, where pMA and fMA refer to the partial and the full maintenance, respectively.
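To make the two variants concrete, the following Python sketch (our own illustration; the function and variable names are ours, not from the paper) evaluates one fixed job order, inserting an MA only when the next job would otherwise drive the maintenance level negative: with partial maintenance the MA duration is the minimum needed, with full maintenance it restores ML_max.

```python
def total_completion_time(jobs, ml0, ml_max, full=False):
    """jobs: list of (p_i, delta_i) in processing order.

    An MA is inserted only when the maintenance level ML is insufficient
    for the next job.  Partial case: the MA duration is the minimum
    needed (delta_i - ML); full case: the MA restores ML to ml_max.
    Unit maintenance speed: the MA duration equals the increase in ML.
    """
    t, ml, total = 0, ml0, 0
    for p, delta in jobs:
        if delta > ml_max:
            return None                     # the job can never be processed
        if delta > ml:                      # a maintenance activity is needed
            duration = (ml_max - ml) if full else (delta - ml)
            t += duration
            ml += duration
        t += p                              # process the job non-preemptively
        ml -= delta
        total += t                          # t is the completion time C_i
    return total

jobs = [(1, 4), (3, 2), (2, 5)]
print(total_completion_time(jobs, ml0=5, ml_max=6))             # 18 (partial)
print(total_completion_time(jobs, ml0=5, ml_max=6, full=True))  # 23 (full)
```

On this toy instance the full maintenance policy is more expensive: restoring to ML_max delays the current job even when a much shorter MA would suffice.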

Prior work and our contribution
Bock et al. [1] proved that (1|fMA| Σ_i C_i) is NP-hard, even when p_i = p for all i or when p_i = δ_i for all i, both by a reduction from the Partition problem [3]; when all the jobs have the same deterioration, i.e., δ_i = δ for all i, the problem can be solved in O(n log n) time. For the partial maintenance case, Bock et al. [1] showed that the SPT rule gives an optimal schedule for (1|pMA| Σ_i C_i) when p_i < p_j implies p_i + δ_i ≤ p_j + δ_j for each pair of i and j (which includes the special cases where p_i = p for all i, or δ_i = δ for all i, or p_i = δ_i for all i). The complexity of the general problem (1|pMA| Σ_i C_i) was left as an open problem. Also, to the best of our knowledge, no approximation algorithms have been designed for either problem.
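The agreeable special case can be checked numerically. The brute-force sketch below (ours; it evaluates each order with lazy, minimal-duration MAs, which lose no generality for a fixed order in the partial case [1]) confirms on a toy instance satisfying the condition that the SPT order attains the optimum.

```python
from itertools import permutations

def cost(order, ml0):
    # total completion time under partial maintenance, with lazy minimal MAs;
    # a minimal MA raises ML only to delta_i, so ML_max is never exceeded
    # as long as every delta_i <= ML_max
    t, ml, total = 0, ml0, 0
    for p, delta in order:
        if delta > ml:                  # MA of the minimum needed duration
            t += delta - ml
            ml = delta
        t += p
        ml -= delta
        total += t
    return total

# agreeable instance: p_i < p_j implies p_i + delta_i <= p_j + delta_j
jobs = [(1, 3), (2, 4), (4, 3), (6, 2)]
ml0 = 4                                 # every delta_i <= ML_max = 6
spt = sorted(jobs)                      # shortest processing time first
best = min(cost(list(o), ml0) for o in permutations(jobs))
print(cost(spt, ml0), best)             # 41 41: SPT attains the optimum
```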
Our main contribution in this paper is to settle the NP-hardness of the general problem (1|pMA| Σ_i C_i). Such NP-hardness might appear a bit surprising at first glance, since one has so much freedom in choosing the starting time and the duration of each MA. Our reduction is from the Partition problem too, using a kind of job swapping argument. This reduction is presented in Section 3, following some preliminary properties we observe for the problem in Section 2. In Section 4, we propose a 2-approximation algorithm for (1|pMA| Σ_i C_i). We conclude the paper in Section 5 with some discussion on the (in-)approximability.
Lastly, we would like to point out that when the objective is to minimize the makespan C_max, i.e., the maximum completion time of jobs, (1|pMA|C_max) can be trivially solved in O(n) time, while (1|fMA|C_max) is NP-hard but admits an O(n^2 (ML_max)^2 log(Σ_{i=1}^n (p_i + δ_i)))-time algorithm based on dynamic programming (and thus admits an FPTAS) [1].
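For intuition on why (1|pMA|C_max) is easy, a closed form can be sketched as follows (our own reading based on lazy minimal MAs, not pseudocode from [1]): the total MA duration needed is max(0, Σ_i δ_i − ML_0) regardless of the job order, provided every δ_i ≤ ML_max.

```python
def pma_makespan(jobs, ml0, ml_max):
    # 1|pMA|Cmax: feasible iff no single deterioration exceeds ml_max;
    # the total MA time equals the shortfall of ML0 against the total
    # deterioration, independently of the order the jobs are processed in.
    if any(delta > ml_max for _, delta in jobs):
        return None                     # infeasible: some job can never run
    total_p = sum(p for p, _ in jobs)
    total_delta = sum(d for _, d in jobs)
    return total_p + max(0, total_delta - ml0)

print(pma_makespan([(1, 4), (3, 2), (2, 5)], ml0=5, ml_max=6))  # 12
```

The value 12 matches a direct simulation of the same instance with lazy minimal MAs, which is consistent with the O(n) claim for the partial maintenance case.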

Preliminaries
Given a feasible schedule π to the problem (1|pMA| Σ_i C_i), which specifies the start processing time for each job and the starting time and the duration of each MA, we slightly abuse π to also denote the permutation of the job indices (1, 2, ..., n) in which the jobs are processed: π = (π_1, π_2, ..., π_n). Lemma 1, proved in [1], characterizes the maintenance activities in an optimal schedule.
Lemma 1 essentially states that each MA should be pushed later in the schedule as much as possible, until absolutely necessary, and that its duration should be minimized to just enough for processing the succeeding job. In the sequel, we limit our discussion to the feasible schedules satisfying these two properties. We define the separation job in such a schedule π as the first job that requires an MA (of a positive duration).

Lemma 2.
Suppose J_{π_k} is the separation job in an optimal schedule π to (1|pMA| Σ_i C_i). Then the jobs before the separation job J_{π_k} are scheduled in the SPT order; the jobs after the separation job J_{π_k} are scheduled in the shortest sum-of-processing-time-and-deterioration first (SSF) order, that is, p_{π_i} + δ_{π_i} ≤ p_{π_{i+1}} + δ_{π_{i+1}} for k < i < n (2); and the jobs adjacent to the separation job J_{π_k} satisfy p_{π_{k−1}} ≤ p_{π_k} + (δ_{π_k} − δ) and p_{π_k} ≤ p_{π_{k+1}} + δ_{π_{k+1}} (3), where δ = ML_0 − Σ_{i=1}^{k−1} δ_{π_i} is the remaining maintenance level before the first MA.
Proof. Starting with an optimal schedule satisfying the properties stated in Lemma 1, one may apply a simple job swapping procedure if the job order is violated either in the prefix or in the suffix of the job order separated by the separation job J_{π_k}. This procedure would decrease the value of the objective, contradicting the optimality. That is, the jobs before J_{π_k} are in the SPT order and the jobs after J_{π_k} are in the SSF order (see Figure 1 for an illustration).
Figure 1: An illustration of the optimal schedule π stated in Lemma 2, where the separation job is J_{π_k}; the width of a framebox does not necessarily equal the processing time of a job or the duration of an MA.
Let δ = ML_0 − Σ_{i=1}^{k−1} δ_{π_i} denote the remaining maintenance level before the first MA. Because δ < δ_{π_k}, an MA (the first one) of duration δ_{π_k} − δ needs to be performed for processing the separation job J_{π_k}. From the optimality of π, swapping the two jobs J_{π_k} and J_{π_{k+1}} should not decrease the objective, which yields p_{π_k} ≤ p_{π_{k+1}} + δ_{π_{k+1}}.
Similarly, swapping the two jobs J_{π_{k−1}} and J_{π_k} should not decrease the objective, which yields p_{π_{k−1}} ≤ p_{π_k} + (δ_{π_k} − δ). These together give Eq. (3). This proves the lemma.
From Lemma 2, one sees that the separation job in an optimal schedule is unique, in the sense that it cannot always be "appended" to either the prefix SPT order or the suffix SSF order. This is reflected in our NP-completeness reduction in Section 3, where we force a certain scenario to happen.
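Lemma 2's structure can also be observed numerically. The brute-force sketch below (our own illustration; the instance values are arbitrary) enumerates all job orders with lazy minimal-duration MAs as in Lemma 1, finds the optimum, and checks the SPT-prefix / SSF-suffix structure around the separation job.

```python
from itertools import permutations

def evaluate(order, ml0):
    # returns (total completion time, index of the separation job or None),
    # with lazy minimal-duration MAs as in Lemma 1
    t, ml, total, sep = 0, ml0, 0, None
    for k, (p, delta) in enumerate(order):
        if delta > ml:
            if sep is None:
                sep = k                 # first job requiring a positive MA
            t += delta - ml
            ml = delta
        t += p
        ml -= delta
        total += t
    return total, sep

jobs = [(2, 5), (3, 1), (1, 4), (4, 2)]   # every delta_i <= ML_max = 6
ml0 = 6
best = min(evaluate(list(o), ml0)[0] for o in permutations(jobs))
for o in permutations(jobs):
    tot, sep = evaluate(list(o), ml0)
    if tot == best and sep is not None:
        prefix, suffix = list(o)[:sep], list(o)[sep + 1:]
        assert prefix == sorted(prefix)                             # SPT prefix
        assert suffix == sorted(suffix, key=lambda j: j[0] + j[1])  # SSF suffix
print(best)   # 30 on this instance
```

Every optimal order on this instance indeed places an SPT-sorted prefix before the separation job and an SSF-sorted suffix after it.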

NP-hardness of the problem
Our reduction is from the classic NP-complete problem Partition [3], formally defined as follows:
Partition: Given a set X = {x_1, x_2, ..., x_n} of n positive integers with Σ_{i=1}^n x_i = 2B, is there a subset X_1 ⊂ X such that Σ_{x∈X_1} x = B?
We abuse X to denote the instance of Partition with the set X = {x_1, x_2, ..., x_n}. From X we construct an instance I of (1|pMA| Σ_i C_i) consisting of 2n + 3 jobs J_0, J_1, ..., J_{2n+2}, in which the two jobs J_i and J_{n+1+i} are identical for i = 0, 1, ..., n; the construction uses a sufficiently large integer M (set in Eq. (4), M > (4n + 8)B). Q_0 denotes the total completion time of jobs in an initial infeasible schedule π_0 (see Figure 2).
Figure 2: The initial infeasible schedule π_0 for the instance I with the separation job J_{2n+2}; π_0 satisfies all properties stated in Lemma 2. All MAs are indicated by their respective durations (for the first MA, its duration is δ_{2n+2} − δ = 2B).
The job order in this initial schedule π_0 is (J_0, J_1, ..., J_n, J_{2n+2}, J_{2n+1}, J_{2n}, ..., J_{n+1}), and the first MA precedes the job J_{2n+2}, which is regarded as the separation job (see Figure 2). Before the separation job J_{2n+2}, the machine maintenance level is allowed to go negative, but has to be restored to zero just for processing J_{2n+2}; afterwards, machine breakdown is no longer tolerated. From ML_0 = Σ_{i=0}^n δ_i − 2B, we know that π_0 is infeasible due to machine breakdown before the first MA; we will convert it to a feasible schedule later. The Query of the decision version of the problem (1|pMA| Σ_i C_i) is whether or not there exists a feasible schedule π such that the total completion time of jobs is no more than Q = Q_0 + B. Despite the infeasibility, the initial schedule π_0 has all the properties stated in Lemma 2, with the separation job J_{2n+2} at the center position: the first (n + 1) jobs are in the SPT order, the last (n + 1) jobs are in the SSF order, and Eq. (3) holds since δ = −2B and p_n = p_{2n+1} = 2B. In the rest of the section, we will show that there is a subset X_1 ⊂ X of sum exactly B if and only if the initial schedule π_0 can be converted into a feasible schedule π with the total completion time of jobs no more than Q = Q_0 + B, through a repeated job swapping procedure.
Notice that the two jobs J i and J n+1+i are identical, for i = 0, 1, . . . , n. In any schedule with the job J 2n+2 at the center position, if exactly one of J i and J n+1+i is scheduled before J 2n+2 , then we always say J i is scheduled before J 2n+2 while J n+1+i is scheduled after J 2n+2 . Also, when the two jobs J i and J n+1+i are both scheduled before J 2n+2 , then J n+1+i precedes J i ; when the two jobs J i and J n+1+i are both scheduled after J 2n+2 , then J i precedes J n+1+i .

Proof of "only if"
In this subsection, we show that if there is a subset X 1 ⊂ X of sum exactly B, then the initial infeasible schedule π 0 can be converted into a feasible schedule π with the total completion time no more than Q = Q 0 + B. We also demonstrate the repeated job swapping procedure leading to this successful schedule π.
Suppose the indices of the elements in the subset X_1 are {i_1, i_2, ..., i_m}, satisfying 1 ≤ i_1 < i_2 < ... < i_m ≤ n. Starting with the initial schedule π_0, we sequentially swap the job J_{i_ℓ−1} with the job J_{n+1+i_ℓ}, for ℓ = 1, 2, ..., m. Let π_ℓ denote the schedule after the ℓ-th job swapping.

Lemma 3.
For each 1 ≤ ℓ ≤ m, the schedule π_ℓ satisfies all the properties stated in Lemma 2; the ℓ-th job swapping decreases the total machine deterioration before the separation job J_{2n+2} by 2x_{i_ℓ}; and the ℓ-th job swapping increases the total completion time by x_{i_ℓ}.

Proof.
Recall that the two jobs J_{i_ℓ} and J_{n+1+i_ℓ} are identical. Before the ℓ-th job swapping (in the schedule π_{ℓ−1}), the sub-schedule from J_{i_ℓ−1} to J_{n+1+i_ℓ} is (J_{i_ℓ−1}, J_{i_ℓ}, ..., J_n, the first MA, J_{2n+2}, J_{2n+1}, ..., J_{n+1+i_ℓ}); after the swapping (in the schedule π_ℓ) this sub-schedule becomes (J_{n+1+i_ℓ}, J_{i_ℓ}, ..., J_n, the first MA, J_{2n+2}, J_{2n+1}, ..., J_{i_ℓ−1}). By a simple induction, all jobs before J_{n+1+i_ℓ} have their processing times less than p_{i_ℓ}, and thus the jobs before the separation job J_{2n+2} are in the SPT order; for a similar reason, the jobs after the separation job J_{2n+2} are in the SSF order.
By the ℓ-th job swapping, the change in the total machine deterioration before the separation job J_{2n+2} is δ_{i_ℓ} − δ_{i_ℓ−1} = −2x_{i_ℓ}, that is, a decrease of 2x_{i_ℓ}. Therefore the duration of the first MA also decreases by 2x_{i_ℓ}. Since J_n always directly precedes J_{2n+2} and p_n < p_{2n+2}, the first half of Eq. (3) holds; since p_{2n+2} + δ_{2n+2} is the smallest among all jobs, the second half of Eq. (3) holds. That is, the schedule π_ℓ satisfies all properties in Lemma 2.
For ease of presentation, let C'_i denote the completion time of the job J_i in the schedule π_ℓ, and let C_i denote the completion time of the job J_i in the schedule π_{ℓ−1}. Comparing with the schedule π_{ℓ−1} (ℓ ≥ 1), after the ℓ-th job swapping between J_{i_ℓ−1} and J_{n+1+i_ℓ}: the completion time of the jobs preceding J_{n+1+i_ℓ} is unchanged; the completion time of each job in between J_{i_ℓ} and J_n (inclusive, n − i_ℓ + 1 of them) increases by x_{i_ℓ}; the duration of the first MA decreases by 2x_{i_ℓ}; the completion time of each job in between J_{2n+2} and J_{n+1+i_ℓ+1} (inclusive, n − i_ℓ + 1 of them) decreases by x_{i_ℓ}; from the last item, the completion time of the jobs succeeding J_{i_ℓ−1} is unchanged. In summary, there are (n − i_ℓ + 2) jobs of which the completion time increases by x_{i_ℓ} and (n − i_ℓ + 1) jobs of which the completion time decreases by x_{i_ℓ}. Therefore, the ℓ-th job swapping between J_{i_ℓ−1} and J_{n+1+i_ℓ} increases the total completion time by x_{i_ℓ}. This finishes the proof.

Theorem 4.
If there is a subset X 1 ⊂ X of sum exactly B, then there is a feasible schedule π to the instance I with the total completion time no more than Q = Q 0 + B.
Proof. Let the indices of the elements in the subset X_1 be {i_1, i_2, ..., i_m}, such that 1 ≤ i_1 < i_2 < ... < i_m ≤ n. Starting with the initial schedule π_0, we sequentially swap the job J_{i_ℓ−1} with the job J_{n+1+i_ℓ}, for ℓ = 1, 2, ..., m. Let π_ℓ denote the schedule after the ℓ-th job swapping, and let Q_ℓ denote the total completion time of jobs in π_ℓ.
From Lemma 3 we know that the ending schedule π_m satisfies all the properties in Lemma 2. Also, the total machine deterioration before the separation job J_{2n+2} in π_m is Σ_{i=0}^n δ_i − 2 Σ_{ℓ=1}^m x_{i_ℓ} = Σ_{i=0}^n δ_i − 2B = ML_0, suggesting that π_m is a feasible schedule. (The first MA has zero duration and thus becomes unnecessary.) Moreover, the total completion time of jobs in π_m is Q_m = Q_0 + Σ_{ℓ=1}^m x_{i_ℓ} = Q_0 + B. Therefore, the schedule π_m obtained from the initial schedule π_0 through the repeated job swapping procedure is a desired one.

Proof of "if"
In this subsection, we show that if there is a feasible schedule π to the constructed instance I with the total completion time no more than Q = Q_0 + B, then there is a subset X_1 ⊂ X of sum exactly B. Assume without loss of generality that the schedule π satisfies the properties in Lemma 2. We start with some structural properties that the schedule π must have.

Lemma 5.
Excluding the job J 2n+2 , there are at least n and at most (n + 1) jobs scheduled before the first MA in the schedule π.
Proof. Recall that in Eq. (4) we set M to be a large value such that M > (4n + 8)B. Using M > (4n + 6)B, it follows from M − 4B = δ_n < δ_{n−1} < ... < δ_1 < δ_0 = M that the initial machine maintenance level ML_0 = Σ_{i=0}^n δ_i − 2B > (n + 1)(M − 4B) − 2B > nM, which is no less than the total machine deterioration of any n jobs. We thus conclude that at least n jobs, excluding J_{2n+2} which has 0 deterioration, can be processed before the first MA.
Nevertheless, if there were more than (n + 1) jobs scheduled before the first MA, excluding J_{2n+2}, then their total machine deterioration would be greater than (n + 2)(M − 4B). Using M > (4n + 8)B, we have (n + 2)(M − 4B) > (n + 1)M − 2B ≥ Σ_{i=0}^n δ_i − 2B = ML_0, a contradiction.

Lemma 6.
There are at most (n + 1) jobs scheduled after the job J_{2n+2} in the schedule π.
Proof. We prove the lemma by contradiction. Firstly, noting that the job J 2n+2 has a much larger processing time compared to any other job (M − 2B versus 2B), we conclude that the earliest possible position for J 2n+2 in the schedule π is right before the first MA. We disallow a zero-duration MA and thus the job J 2n+2 can never be the separation job in π due to δ 2n+2 = 0.
If J_{2n+2} is scheduled after the separation job, by Eq. (2) or the SSF rule, for every job J_i scheduled after J_{2n+2} we have p_{2n+2} ≤ p_i + δ_i. If J_{2n+2} is scheduled right before the first MA, by Eq. (3), for the separation job J_i we have p_{2n+2} ≤ p_i + (δ_i − δ); by Eqs. (2) and (3), for every other job J_i scheduled after J_{2n+2} we have p_{2n+2} ≤ p_i + δ_i. Therefore, the completion time of a job scheduled ℓ positions after the job J_{2n+2} is at least (ℓ + 1) × p_{2n+2}. If there were (n + 2) jobs scheduled after J_{2n+2}, then the total completion time of the last (n + 3) jobs would be at least Σ_{ℓ=0}^{n+2} (ℓ + 1) p_{2n+2} (Eq. (5)). However, using p_j ≤ 2B for every j ≠ 2n + 2 to bound Q_0, and using M > (3n + 6)B, the total completion time of the last (n + 3) jobs in π would be strictly greater than Q = Q_0 + B, contradicting our assumption.
Combining Lemmas 5 and 6, we have the following lemma regarding the position of J_{2n+2} in the schedule π.

Lemma 7.
In the schedule π, the position of the job J_{2n+2} has three possibilities: Case 1: there are (n + 1) jobs before the first MA, π_{n+2} = 2n + 2, and J_{π_{n+3}} is the separation job. Case 2: there are (n + 1) jobs before the first MA, J_{π_{n+2}} is the separation job, and π_{n+3} = 2n + 2. Case 3: there are n jobs before the first MA, J_{π_{n+1}} is the separation job, and π_{n+2} = 2n + 2.
Proof. Note that the processing time of the job J_{2n+2} is strictly greater than that of any other job, while the sum of its processing time and machine deterioration, p_{2n+2} + δ_{2n+2}, achieves the minimum. Because J_{2n+2} cannot act as the separation job due to δ_{2n+2} = 0, by Lemma 2 it can only be either the last job scheduled before the first MA or the first job scheduled after the separation job (through a possible job swapping, if necessary). Using Lemmas 5 and 6, it is easy to distinguish the three possible cases stated in the lemma.
Recall that the job order in the initial infeasible schedule π 0 is (J 0 , J 1 , . . . , J n , J 2n+2 , J 2n+1 , J 2n , . . . , J n+2 , J n+1 ), and the first MA is executed before processing the job J 2n+2 , which is regarded as the separation job (see Figure 2). In the sequel, we will again convert π 0 into our target schedule π through a repeated job swapping procedure. During such a procedure, the job J 2n+2 is kept at the center position, and a job swapping always involves a job before J 2n+2 and a job after J 2n+2 .
In Cases 1 and 3 of the schedule π, the job J_{2n+2} is at the center position (recall that there are in total 2n + 3 jobs), and therefore the target schedule is well set. In Case 2, J_{2n+2} is at position n + 3, not the center position; we first exchange J_{2n+2} and J_{π_{n+2}} to obtain a schedule π', which becomes our target schedule. That is, we will first convert π_0 into π' through a repeated job swapping procedure, and at the end exchange J_{2n+2} back to the position n + 3 to obtain the final schedule π. In summary, our primary goal is to convert the schedule π_0 into the target schedule through a repeated job swapping procedure, keeping the job J_{2n+2} at the center position and keeping the first MA right before the job J_{2n+2} (to be detailed next). At the end, to obtain the target schedule π: in Case 1, we swap the job J_{2n+2} and the first MA (i.e., moving the first MA one position backward); in Case 2, we swap J_{2n+2} with its immediate succeeding MA and the following job (with that MA merged into the first MA); in Case 3, we swap the first MA and its immediate preceding job (i.e., moving the first MA one position forward).
In the target schedule (π in Cases 1 and 3, or π' in Case 2), let R = {r_1, r_2, ..., r_m} denote the subset of indices such that both J_{r_j} and J_{n+1+r_j} are among the first (n + 1) jobs, where 0 ≤ r_1 < r_2 < ... < r_m ≤ n, and let L = {ℓ_1, ℓ_2, ..., ℓ_m} denote the subset of indices such that both J_{ℓ_j} and J_{n+1+ℓ_j} are among the last (n + 1) jobs, where 0 ≤ ℓ_1 < ℓ_2 < ... < ℓ_m ≤ n. Note that J_{2n+2} is at the center position in the target schedule, and thus |R| = |L| and we let m = |R|. Clearly, all these ℓ_j's and r_j's are distinct from each other.
In the repeated job swapping procedure leading the initial infeasible schedule π_0 to the target feasible schedule, the j-th job swapping is to swap the two jobs J_{ℓ_j} and J_{n+1+r_j}. The resultant schedule after the j-th job swapping is denoted as π_j, for j = 1, 2, ..., m. In Section 3.1, the job swapping is "regular" in the sense that ℓ_j = r_j − 1 for all j, but now ℓ_j and r_j do not necessarily relate to each other. We remark that immediately after the swapping, a job sorting is needed to restore the SPT order for the prefix and the SSF order for the suffix (see the last paragraph before Section 3.1 for possible re-indexing of the jobs).
The following Lemma 8 on the j-th job swapping, when ℓ_j < r_j, is an extension of Lemma 3.

Lemma 8.
For each 1 ≤ j ≤ m, if the schedule π_{j−1} satisfies the first two properties in Lemma 2 and ℓ_j < r_j, then the schedule π_j satisfies the first two properties in Lemma 2; the j-th job swapping decreases the total machine deterioration before the center job J_{2n+2} by δ_{ℓ_j} − δ_{r_j} = 2 Σ_{k=ℓ_j+1}^{r_j} x_k; the j-th job swapping increases the total completion time by at least Σ_{k=ℓ_j+1}^{r_j} x_k; and the increment equals Σ_{k=ℓ_j+1}^{r_j} x_k if and only if ℓ_j > r_{j−1}.
Proof. Note that 0 ≤ r_1 < r_2 < ... < r_m ≤ n, 0 ≤ ℓ_1 < ℓ_2 < ... < ℓ_m ≤ n, and all these ℓ_j's and r_j's are distinct from each other. Since ℓ_j < r_j, we assume without loss of generality that r_{j'−1} < ℓ_j < r_{j'} for some j' ≤ j, that is, the (j − j') jobs J_{n+1+r_{j'}}, J_{n+1+r_{j'+1}}, ..., J_{n+1+r_{j−1}} have been moved to be in between J_{ℓ_j} and the center job J_{2n+2} in the schedule π_{j−1}.
The j-th job swapping between the two jobs J_{ℓ_j} and J_{n+1+r_j} clearly decreases the total machine deterioration before the center job J_{2n+2} by δ_{ℓ_j} − δ_{r_j} = 2 Σ_{k=ℓ_j+1}^{r_j} x_k. To estimate the total completion time, we decompose the j-th job swapping between the two jobs J_{ℓ_j} and J_{n+1+r_j} into a sequence of (r_j − ℓ_j) "regular" job swappings, between the two jobs J_k and J_{n+1+k+1} for k = r_j − 1, r_j − 2, ..., ℓ_j + 1, ℓ_j. We remark that the order of these regular job swappings is important, which guarantees that at the time of such a swapping, the job J_k is before the center job J_{2n+2} and the job J_{n+1+k+1} is after the center job J_{2n+2} (see the last paragraph before Section 3.1 for possible re-indexing of the jobs). For each such regular job swapping between the two jobs J_k and J_{n+1+k+1}, we can apply (almost, see below) Lemma 3 to conclude that it increases the total completion time by at least x_{k+1}.
From the proof of Lemma 3, the increment in the total completion time equals x_{k+1} if and only if there are exactly (n − k + 1) jobs in between J_{n+1+k+1} and J_n (inclusive), that is, the (j − j') jobs J_{n+1+r_{j'}}, J_{n+1+r_{j'+1}}, ..., J_{n+1+r_{j−1}} should not have been moved in between J_k and the center job J_{2n+2} in the schedule π_{j−1}. Therefore, the j-th job swapping increases the total completion time by at least Σ_{k=ℓ_j+1}^{r_j} x_k; and the increment equals Σ_{k=ℓ_j+1}^{r_j} x_k if and only if ℓ_j > r_{j−1} (i.e., j' = j). This proves the lemma.

Lemma 9.
For each 1 ≤ j ≤ m, if the schedule π_{j−1} satisfies the first two properties in Lemma 2 and ℓ_j > r_j, then the schedule π_j satisfies the first two properties in Lemma 2; the j-th job swapping increases the total machine deterioration before the center job J_{2n+2} by δ_{r_j} − δ_{ℓ_j} = 2 Σ_{k=r_j+1}^{ℓ_j} x_k; the j-th job swapping increases the total completion time by at least Σ_{k=r_j+1}^{ℓ_j} x_k.
Proof. We first prove an analog of Lemma 3 on a regular job swapping between the two jobs J_{i+1} and J_{n+1+i}, which can be viewed as an inverse operation of the regular job swapping between the two jobs J_i and J_{n+1+i+1}.
For ease of presentation, let C'_i denote the completion time of the job J_i in the schedule after the regular job swapping, and let C_i denote the completion time of the job J_i in the schedule before the regular job swapping. Comparing with the schedule before the swapping: the completion time of the jobs preceding J_{n+1+i} is unchanged; the completion time of each job in between J_{i+2} and J_n (inclusive, n − i − 1 of them) decreases by x_{i+1}; the duration of the first MA increases by 2x_{i+1}; the completion time of each job in between J_{2n+2} and J_{n+1+i+1} (inclusive, n − i + 1 of them) increases by x_{i+1}; consequently, the completion time of the jobs succeeding J_{i+1} is unchanged. The total completion time of jobs in the schedule after this regular job swapping thus increases by at least x_{i+1}. Note that the increment equals x_{i+1} if and only if there are exactly (n − i + 1) jobs in between J_{2n+2} and J_{n+1+i+1} (inclusive), that is, the (j − 1) jobs J_{ℓ_1}, J_{ℓ_2}, ..., J_{ℓ_{j−1}} should not have been moved in between the center job J_{2n+2} and J_{n+1+i} in the schedule π_{j−1}.
Using the above analog of Lemma 3, the rest of the proof is similar to the proof of Lemma 8, by decomposing the j-th job swapping between the two jobs J_{ℓ_j} and J_{n+1+r_j} into a sequence of (ℓ_j − r_j) "regular" job swappings, between the two jobs J_{k+1} and J_{n+1+k} for k = ℓ_j − 1, ℓ_j − 2, ..., r_j + 1, r_j.

Theorem 10.
If there is a feasible schedule π to the instance I with the total completion time no more than Q = Q 0 + B, then there is a subset X 1 ⊂ X of sum exactly B.
Proof. We start with a feasible schedule π, which has the first two properties stated in Lemma 2 and for which the total completion time is no more than Q = Q_0 + B. Excluding the job J_{2n+2}, using the first (n + 1) jobs and the last (n + 1) jobs in π, we determine the two subsets of indices R = {r_1, r_2, ..., r_m} and L = {ℓ_1, ℓ_2, ..., ℓ_m}, and define the corresponding m job swappings. We then repeatedly apply these job swappings to convert the initial infeasible schedule π_0 into π.
In Case 1, the total machine deterioration of the first (n + 1) jobs in π is ML_0 − δ = Σ_{i=0}^n δ_i − 2B − δ, where δ ≥ 0 is the remaining machine maintenance level before the first MA; comparing with the schedule π_0, whose first (n + 1) jobs have total machine deterioration Σ_{i=0}^n δ_i, the m job swappings together decrease the total machine deterioration before the center job J_{2n+2} by 2B + δ. (6)
On the other hand, by Lemmas 8 and 9, every job swapping increases the total completion time by at least half of the amount by which it changes the total machine deterioration before the center job J_{2n+2}; hence the total completion time of jobs in the schedule π is at least Q_0 + B + δ/2. Since it is no more than Q = Q_0 + B, it follows that 1) δ = 0; 2) there is no pair of swapping jobs J_{ℓ_j} and J_{n+1+r_j} such that ℓ_j > r_j; and 3) ℓ_1 < r_1 < ℓ_2 < r_2 < ... < ℓ_m < r_m (from the third item of Lemma 8). Therefore, from Eq. (6), for the subset X_1 = ∪_{j=1}^m {x_{ℓ_j+1}, x_{ℓ_j+2}, ..., x_{r_j}}, we have Σ_{x∈X_1} x = B. That is, the instance X of the Partition problem is a yes-instance.
In Case 2, after all the m job swappings, the first MA immediately precedes J_{2n+2} and has its duration −δ since δ_{2n+2} = 0, where δ ≥ 0 is the remaining machine maintenance level before the first MA. The job J_{2n+2}, its immediate succeeding MA, and the following job need to be swapped to obtain the schedule π; the thus moved MA is merged into the first MA, resulting in a positive duration. The total machine deterioration of the first (n + 1) jobs in π (before the first MA) is again ML_0 − δ, implying that Eq. (6) still holds in this case.
On the other hand, the total completion time of jobs in the schedule π is again at least Q_0 + B + δ/2. Then, similarly to Case 1, it follows that 1) δ = 0; 2) there is no pair of swapping jobs J_{ℓ_j} and J_{n+1+r_j} such that ℓ_j > r_j; and 3) ℓ_1 < r_1 < ℓ_2 < r_2 < ... < ℓ_m < r_m. Therefore, from Eq. (6), for the subset X_1 = ∪_{j=1}^m {x_{ℓ_j+1}, x_{ℓ_j+2}, ..., x_{r_j}}, we have Σ_{x∈X_1} x = B. That is, the instance X of the Partition problem is a yes-instance.
In Case 3, after all the m job swappings, the first MA immediately precedes J_{2n+2} and has its duration −δ since δ_{2n+2} = 0, where δ ≤ 0 is the remaining machine maintenance level before the first MA. Therefore, J_{π_{n+1}} and the first MA need to be swapped to obtain the schedule π. The total machine deterioration of the first (n + 1) jobs in π (before the first MA) is again ML_0 − δ, implying that Eq. (6) still holds in this case, except that here δ ≤ 0.
On the other hand, the total completion time of jobs in the schedule π is again at least Q_0 + B + δ/2. Then, similarly to Case 1, except that here δ ≤ 0, it follows that 1) δ = 0; 2) there is no pair of swapping jobs J_{ℓ_j} and J_{n+1+r_j} such that ℓ_j > r_j; and 3) ℓ_1 < r_1 < ℓ_2 < r_2 < ... < ℓ_m < r_m. Therefore, from Eq. (6), for the subset X_1 = ∪_{j=1}^m {x_{ℓ_j+1}, x_{ℓ_j+2}, ..., x_{r_j}}, we have Σ_{x∈X_1} x = B. That is, the instance X of the Partition problem is a yes-instance.
The following theorem follows immediately from Theorems 4 and 10.
Theorem 11. The general problem (1|pMA| Σ_j C_j) is NP-hard.

The 2-approximation algorithm
Suppose k < k*; then for each i such that k ≤ i < k*, we have C_i ≤ C*_i. Therefore, C_i ≤ C*_i for each i = n, n − 1, ..., k, and it follows that the total completion time of the jobs {J_k, J_{k+1}, ..., J_n} in π is at most that in π*, which is at most OPT. (8) On the other hand, by the SPT order, the algorithm A_1 achieves the minimum total completion time for the jobs {J_1, J_2, ..., J_{k−1}}. One clearly sees that in the optimal schedule π*, the sub-total completion time of {J_1, J_2, ..., J_{k−1}} is upper-bounded by OPT. Therefore, the sub-total completion time of {J_1, J_2, ..., J_{k−1}} in π is at most OPT as well. (9) Merging Eqs. (8) and (9), we conclude that the total completion time of the schedule π is at most 2 OPT. This proves the performance ratio of 2 (which can also be shown tight on a trivial 2-job instance I = {J_1 = (1, λ), J_2 = (λ − 1, 1), ML_0 = ML_max = λ}, with a sufficiently large λ). The running time of the algorithm A_1 is dominated by two rounds of sorting, each taking O(n log n) time.
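The tight instance can be checked numerically. The sketch below (ours) evaluates both job orders of I under lazy minimal MAs; which order A_1 outputs on this instance is not spelled out here, so the sketch simply exhibits the factor-2 gap: the SPT order is optimal, while the SSF order costs almost twice as much.

```python
def cost(order, ml0):
    # total completion time under partial maintenance, lazy minimal MAs
    t, ml, total = 0, ml0, 0
    for p, delta in order:
        if delta > ml:
            t += delta - ml
            ml = delta
        t += p
        ml -= delta
        total += t
    return total

lam = 100                              # lambda; the ratio tends to 2 as it grows
J1, J2 = (1, lam), (lam - 1, 1)        # the instance I, with ML_0 = ML_max = lam
opt = cost([J1, J2], lam)              # SPT order: lam + 2
ssf = cost([J2, J1], lam)              # SSF order (p+delta: 100 < 101): 2*lam
print(opt, ssf)                        # 102 200
```

The ratio ssf/opt = 2λ/(λ + 2) approaches 2 for a sufficiently large λ, matching the claimed tightness.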

Concluding remarks
We investigated the single machine scheduling problem with job-dependent machine deterioration, recently introduced by Bock et al. [1], with the objective to minimize the total completion time of jobs. In the partial maintenance case, we proved the NP-hardness of the general problem, thus settling the open problem left in the previous work. From the approximation perspective, we designed a 2-approximation algorithm, for which the ratio 2 is tight on a trivial two-job instance. The 2-approximation algorithm is simple, but it is the first such work. Our major contribution is the non-trivial NP-hardness proof, which might appear surprising at first glance since one has so much freedom in choosing the starting time and the duration of the maintenance activities. It would be interesting to further study the (in-)approximability of the problem. It would also be interesting to study the problem in the full maintenance case, which was shown NP-hard, from the approximation algorithm perspective. Approximating the problem in the full maintenance case seems more challenging, since we need to deal with multiple bin-packing sub-problems, while the inter-relationship among them is much more complex.