ALFRED P. SLOAN SCHOOL OF MANAGEMENT

WORKING PAPER
On-Line Maintenance of Optimal Schedules for a Single Machine

Amril Aman
Anantaram Balakrishnan
Vijaya Chandru

SSM#3327-91-MSA
August 1991

MASSACHUSETTS INSTITUTE OF TECHNOLOGY
50 MEMORIAL DRIVE
CAMBRIDGE, MASSACHUSETTS 02139
On-Line Maintenance of Optimal Schedules for a Single Machine

Amril Aman*
School of Industrial Engineering
Purdue University
West Lafayette, IN 47907

Anantaram Balakrishnan†
Sloan School of Management
M.I.T.
Cambridge, MA 02139

Vijaya Chandru‡
School of Industrial Engineering
Purdue University
West Lafayette, IN 47907

August 1991

* Supported in part by the NSF Engineering Research Center for Intelligent Manufacturing Systems, Purdue University.
† Supported in part by M.I.T.'s Leaders for Manufacturing program.
‡ Supported in part by ONR Grant N00014-86-K-0689 and by the NSF Engineering Research Center for Intelligent Manufacturing Systems at Purdue University.
Abstract

Effective and efficient scheduling in a dynamically changing environment is important for real-time control of manufacturing, computer, and telecommunication systems. This paper illustrates the algorithmic and analytical issues associated with developing efficient and effective methods to update schedules on-line. We consider the problem of dynamically scheduling precedence-constrained jobs on a single processor to minimize the maximum completion time penalty. We first develop an efficient technique to reoptimize a rolling schedule when new jobs arrive. The effectiveness of reoptimizing the current schedule as a long-term on-line strategy is measured by bounding its performance relative to oracles that have perfect information about future job arrivals.

Keywords: Scheduling, design and analysis of algorithms, heuristics
1. Introduction

Planning and scheduling dynamic systems with random job arrivals, failures, and preemption is a very challenging task. Typically, since future events cannot be forecast with enough detail and accuracy, planners often use on-line scheduling strategies. Consider, for instance, a production system with random job arrivals. On-line methods apply when detailed information regarding a job's processing requirement is revealed only at its release time. Thus, each release time represents an epoch at which the existing schedule is revised to reflect the new information. One on-line scheduling strategy consists of reoptimizing the current "rolling" schedule at each job arrival epoch using a deterministic scheduling algorithm that only uses information about the current system and workload status. We refer to this scheduling strategy as on-line reoptimization. This strategy of reacting to changes in system status (job arrival or completion, processor failure, etc.) by reoptimizing and updating the current schedule raises two issues. First, given an optimal schedule of n tasks, can we devise an efficient method to revise this schedule, say, when a new task enters the system? Intuitively, since the existing n-job schedule contains useful information, exploiting this information to adjust the schedule is likely to be more efficient compared to reconstructing the optimal (n+1)-job schedule from scratch. We refer to the latter method as a zero-base algorithm, while a method that exploits current schedule information is an updating algorithm. Computer scientists have emphasized this issue of relative efficiency of updating methods in the context of certain geometric and graph problems by developing specialized data structures and updating algorithms (see, for example, Spira and Pan [1975], Chin and Houck [1978], Even and Shiloach [1981], Overmars and van Leeuwen [1981], Frederickson and Srinivas [1984], and Frederickson [1985]). In contrast, the research on deterministic resource scheduling (see, for example, Graham et al. [1979]) focuses primarily on zero-base algorithms. In this paper, we illustrate the algorithmic issues in dynamic reoptimization by developing an efficient updating method for one class of single-machine scheduling problems.
Efficient schedule updating methods are especially important for real-time planning and control. Consider, for instance, the following "bidding" scheme for assigning tasks in a distributed processing system (see, for example, Ramamritham and Stankovic [1984], Zhao and Ramamritham [1985], Malone et al. [1988]). Jobs with varying processing requirements and due dates arrive randomly at different processor locations. Each processor maintains and updates its own local schedule. When a new job enters the system (or a processor fails), the source node (or a central coordinator) queries the other processors to determine their expected completion times before deciding where to dispatch the job. To formulate its response, each target processor must adjust its current schedule to accommodate the new job and determine its tentative completion time. Subsequently, when the job is awarded to a processor, the selected processor must again update its schedule. Given the possibly large volume of job announcements and reassignments, devising efficient updating algorithms to accommodate new jobs is clearly critical for this type of real-time control mechanism.
In addition to updating efficiency, we are also interested in the effectiveness of the on-line reoptimization strategy. In particular, what is the relative performance (i.e., closeness to optimality) of schedules obtained through on-line reoptimization compared to an "optimal" off-line decision procedure that has perfect information about the future? Recently, computer scientists have developed a standardized approach to study this performance characteristic of on-line methods. The approach seeks a worst-case measure called competitiveness to evaluate solution effectiveness. We illustrate this mode of analyzing effectiveness using our single machine scheduling example.
Studying efficiency and effectiveness issues for on-line reoptimization required a judicious choice of the scheduling context. In particular, both the structure and performance of on-line updating methods depend strongly on the scheduling objective. Consider, for instance, the problem of scheduling a single machine to minimize maximum tardiness for n unrelated jobs with identical release times. The earliest due-date rule finds the optimal schedule for this problem (Jackson [1955]). Given an earliest due-date (EDD) schedule, the updating problem consists of constructing a new EDD schedule (by adjusting the current schedule) to accommodate a new job. If n denotes the number of currently scheduled jobs, re-sorting the (n+1) jobs to construct the new EDD schedule requires O(n log n) operations. However, updating can be performed much more efficiently if we use a heap structure (see, for example, Tarjan [1983], Aho, Hopcroft and Ullman [1974]) to store the current schedule. Updating the heap when a new job arrives merely involves inserting the new item and rebalancing the heap, which requires O(log n) effort, as the sketch below illustrates. For other scheduling objectives, the updating method is not so obvious, and the n-fold computational improvement may not be possible. And, of course, for NP-hard scheduling problems (e.g., minimizing the sum of completion times for jobs with arbitrary release dates) the updating problem is not likely to be polynomially solvable either.
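The following minimal sketch (our Python rendering, not from the paper) shows this heap-based EDD update: the current schedule is kept as a binary heap keyed on due date, so accommodating a new arrival is a single O(log n) push instead of an O(n log n) re-sort.

```python
import heapq

def make_edd_heap(jobs):
    """jobs: (due_date, job_id) pairs; O(n) heap construction."""
    heap = list(jobs)
    heapq.heapify(heap)
    return heap

def add_job(heap, due_date, job_id):
    """Update step: insert the new job and rebalance -- O(log n)."""
    heapq.heappush(heap, (due_date, job_id))

def dispatch_order(heap):
    """Pop jobs in earliest-due-date order (destructive)."""
    while heap:
        yield heapq.heappop(heap)[1]

schedule = make_edd_heap([(12, 'a'), (5, 'b'), (9, 'c')])
add_job(schedule, 7, 'd')                 # new arrival at some epoch
print(list(dispatch_order(schedule)))     # ['b', 'd', 'c', 'a']
```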
In this paper, we develop and analyze the worst-case performance of an on-line updating method for a single machine scheduling problem with precedence-constrained jobs, where the objective consists of minimizing the maximum completion time penalty over all jobs. In the classification scheme proposed by Graham et al. [1979], we consider the 1/prec/f_max problem. Lawler [1973] proposed an O(n²) zero-base algorithm to construct an optimal n-job schedule for this problem. Subsequently, Baker et al. [1982] generalized this algorithm to the case where jobs have arbitrary but known release dates, and preemption is permitted. For the 1/prec/f_max problem, we focus on a special class of penalty functions that satisfy a consistency property defined in Section 2. Several penalty functions such as linear completion time and tardiness penalties satisfy this property. For this class of scheduling problems, Section 3 first describes a new zero-base algorithm called the Forward algorithm (unlike Lawler's algorithm, this method schedules jobs from front to back) with O(m + n log n) worst-case time complexity, where m denotes the number of arcs in the precedence graph. Subsequently, we develop an updating version of the Forward algorithm that uses information about the current n-job optimal schedule to optimally add a new job. If the new job has n' (< n) ancestors, and the precedence subgraph induced by these ancestors has m' (< m) arcs, the computational complexity of the Forward updating algorithm is O(m' + n'). Results of computer simulations reported in Section 4 confirm that, in practice, the Forward updating procedure requires significantly lower computational time than applying the zero-base algorithm to construct the (n+1)-job optimal schedule.
Section 5 analyzes the competitiveness of on-line reoptimization for selected penalty structures. We show that the method is 2-competitive (i.e., its worst-case performance ratio relative to the optimal, perfect information schedule is bounded above by a factor of 2) for a delivery-time version of the scheduling problem (without preemption), and when the penalty function is subadditive (with preemption).
This paper makes several specific contributions for the 1/prec/f_max problem with consistent penalty functions. In particular, we: (i) propose a new FORWARD algorithm; (ii) develop an updating (on-line) version of the FORWARD algorithm; (iii) empirically demonstrate the computational benefits of using updating algorithms (instead of zero-base algorithms) to perform schedule adjustments; and, (iv) analyze the competitiveness of on-line reoptimization for some special cases. However, our broader purpose is to use the 1/prec/f_max problem as an example to motivate the need for further work in the general area of efficient and effective schedule updating methods, and to illustrate the issues that arise in developing such methods.
2. Problem description and notation

The 1/prec/f_max problem consists of scheduling n jobs on a single machine, subject to precedence constraints on the jobs. The job precedence constraints are specified via a directed, acyclic precedence graph G whose nodes correspond to the jobs; the graph contains a directed arc (i,j) if job i is an immediate predecessor of job j. Job i is said to be an ancestor of job j if the precedence graph contains a directed path from node i to node j. For convenience, assume that jobs are indexed from 1 to n, with i < j if job i precedes job j. Let p_j denote the processing time required for job j. Let B_j denote the set of all immediate predecessors of job j, and let A_j ⊇ B_j denote the set of all ancestors of job j. Let m denote the number of arcs in the precedence graph. We assume that the precedence graph is stored as a linked list requiring O(m) storage, with pointers from each job j to each of its immediate predecessors. Each job j carries a penalty function f_j(t_j) that depends on its completion time t_j. The scheduling objective is to minimize f_max = Max {f_j(t_j) : j = 1,2,...,n}.
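As a concrete rendering of this storage scheme, the following sketch (our own code, with a hypothetical arc set in the spirit of the paper's later examples) keeps, for each job j, the predecessor list B_j, and recovers the ancestor set A_j by a backward traversal that touches each arc at most once.

```python
from collections import defaultdict

B = defaultdict(list)            # B[j]: immediate predecessors of job j
for i, j in [(1, 5), (2, 5), (3, 4), (3, 6), (5, 6)]:   # hypothetical arcs (i, j)
    B[j].append(i)

def ancestors(j):
    """Return A_j, the set of all jobs with a directed path into job j."""
    seen, stack = set(), list(B[j])
    while stack:
        i = stack.pop()
        if i not in seen:
            seen.add(i)
            stack.extend(B[i])
    return seen

print(sorted(ancestors(6)))      # [1, 2, 3, 5]
```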
Lawler [1973] developed the following O(n²) zero-base algorithm to solve the 1/prec/f_max problem. The method iteratively builds an optimal sequence by scheduling jobs in reverse order, i.e., it first identifies the job to be processed last (i.e., in position n of the schedule), then the job in position (n-1), and so on. At stage k, let Q_k denote the set of k currently unscheduled jobs, and let T_k = Σ {p_j : j ∈ Q_k} denote the cumulative processing time for all jobs in Q_k. Also, let R_k be the subset of jobs in Q_k whose successors, if any, have all been already scheduled; we refer to jobs in R_k as the set of eligible jobs at stage k. During stage k, the algorithm assigns to position k the eligible job j* with minimum penalty at time T_k, i.e., f_j*(T_k) = Min {f_j(T_k) : j ∈ R_k}. The procedure terminates at the end of stage 1. Since each step requires O(n) operations to identify the eligible job with minimum penalty, the overall complexity of the algorithm is O(n²).
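The following sketch (ours, not Lawler's code) implements this backward rule directly; `succs[j]` lists the immediate successors of job j, and `penalty[j]` is the function f_j.

```python
def lawler(jobs, p, penalty, succs):
    """Return a sequence minimizing max_j f_j(t_j), by Lawler's backward rule."""
    unscheduled = set(jobs)                     # Q_k
    T = sum(p[j] for j in jobs)                 # T_k: total remaining work
    sequence = []
    while unscheduled:
        # R_k: unscheduled jobs whose successors are all scheduled already
        eligible = [j for j in unscheduled
                    if all(s not in unscheduled for s in succs[j])]
        # position k gets the eligible job with minimum penalty at time T_k
        j_star = min(eligible, key=lambda j: penalty[j](T))
        sequence.append(j_star)
        unscheduled.remove(j_star)
        T -= p[j_star]
    sequence.reverse()                          # positions were filled n, n-1, ..., 1
    return sequence

# tardiness penalties f_j(t) = max(0, t - d_j); job 1 precedes job 3
due = {1: 3, 2: 6, 3: 4}
penalty = {j: (lambda d: lambda t: max(0, t - d))(d) for j, d in due.items()}
print(lawler([1, 2, 3], {1: 2, 2: 3, 3: 1}, penalty, {1: [3], 2: [], 3: []}))  # [1, 3, 2]
```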
While Lawler's algorithm applies to arbitrary penalty functions, we will focus on a special class of penalty functions that satisfy the following consistency condition:

A set of functions f_1(.), f_2(.), ..., f_n(.) is said to be consistent if, for every pair of indices i, j ∈ {1, 2, ..., n}, either f_i(t) ≤ f_j(t) for all values of completion time t, or f_i(t) ≥ f_j(t) for all t.

As Figure 1(a) shows, consistent penalty functions do not intersect; Figure 1(b) shows two penalty functions that are not consistent.
Several natural penalty functions satisfy the consistency property. Examples, shown in Figure 2, include (i) the weighted (linear and quadratic) completion time criteria f_j(t) = w_j t and f_j(t) = w_j t², (ii) the lateness penalty f_j(t) = t - d_j, where d_j is the due date for job j, and (iii) the tardiness penalty f_j(t) = max {0, t - d_j}. Jobs with consistent penalty functions have the same relative ranking (say, increasing order of penalties) for all completion time values. Hence, we will sometimes omit the completion time argument, and denote as f_i > f_j the fact that job i has a higher penalty than job j (for all completion time values).
For convenience, we will assume that the penalty functions for the different jobs are distinct, i.e., either f_i > f_j or f_i < f_j. Thus, the job with the maximum penalty will always be unique. Our Forward algorithm requires only the relative order of jobs with respect to the penalty functions rather than the exact penalty values for different completion times. Hence, ranking the jobs in order of penalties is sufficient.

3. The Forward Algorithm for the 1/prec/f_max Problem with Consistent Penalty Functions
This section first describes a new zero-base procedure called the FORWARD algorithm to find the optimal n-job schedule for the 1/prec/f_max problem with consistent penalty functions. Unlike Lawler's algorithm, the new method schedules jobs from front to back (i.e., it assigns jobs to earlier positions first). We prove the correctness of this algorithm, and demonstrate how it facilitates updating the schedule when a new job enters the system.

3.1 The FORWARD Zero-Base Algorithm
The Forward algorithm is motivated by the following intuitive argument. Recall that, for consistent penalty functions, the relative ordering of jobs (in terms of their penalties) does not vary with time. Hence, if jobs are not constrained by precedence restrictions, we can minimize f_max by scheduling the jobs in decreasing order of penalty. However, this decreasing-penalty order may violate some precedence constraints. To satisfy the precedence constraints, consider the following 'natural' scheme to selectively (and parsimoniously) advance jobs: Start with the decreasing-penalty order as the candidate sequence; examine jobs from front to back in this sequence, and ensure precedence feasibility for each job j by advancing (i.e., scheduling immediately before job j) every ancestor that is currently scheduled after job j. This procedure effectively attempts to deviate as little as possible from the decreasing-penalty order by advancing only the essential low-penalty jobs that must precede the high-penalty jobs. As we demonstrate next, this principle forms the basis for the Forward algorithm, and gives the optimal n-job schedule.
To describe and prove the validity of the Forward algorithm, we use some additional notation. For any subset of jobs S, let B_j(S) be the set of all immediate predecessors of job j belonging to subset S, i.e., B_j(S) = B_j ∩ S. Similarly, A_j(S) = A_j ∩ S denotes the set of ancestors of job j in subset S. The Forward algorithm relies on the following result. Recall that we have indexed jobs such that i < j if job i precedes job j.

Proposition 1: For any subset of jobs S, let j* be the job in this subset with the maximum penalty. Then, subset S has an optimal schedule, denoted as Π(S), that assigns job j* to position (|A_j*(S)| + 1), and all its ancestors to the first |A_j*(S)| positions in increasing order of job indices (where |A| denotes the number of elements of the set A).
Proof: The first part of the proposition states that the subset S has an optimal schedule that processes the maximum-penalty job j* as soon as possible, i.e., this schedule first processes all ancestors of job j*, followed immediately by job j*. We prove this result using an interchange argument. Consider an alternative optimal schedule Π'' that does not satisfy this property, and let job j* be scheduled in position k > |A_j*(S)| + 1 in schedule Π''. Let job j' be a non-ancestor of j* that is scheduled closest to, but before, job j*, and let k' < k denote the position of job j' in Π''. By our choice of j', all jobs in positions (k'+1) to (k-1) must be ancestors of job j*. Also, job j' is not an ancestor of any of these jobs; otherwise, j' would be an ancestor of job j* as well. Finally, since job j* has the maximum penalty in the set S, f_j*(t) > f_j'(t), where t is the current completion time of job j*. Consider now the new schedule obtained by postponing job j' to position k, and advancing all jobs in positions k'+1 to k by one position. By our previous observations, the new schedule is feasible; furthermore, since job j* has a higher penalty than job j', the new schedule does not increase the maximum penalty of the schedule. By repeating this process until all non-ancestors of job j* are postponed beyond j*, we get an optimal schedule that satisfies the condition of the proposition.

Now, job j* has the maximum penalty among all jobs in S. Thus, the penalty incurred for job j* must exceed the penalty for each of its ancestors (since these are completed earlier and have lower penalty functions), regardless of their relative order in positions 1 to |A_j*(S)|. To be feasible, however, the assignment of these ancestors must satisfy the precedence constraints. Since jobs are numbered in order of their precedence, scheduling the ancestors in increasing index order gives a feasible schedule. ∎
Proposition 1 suggests the following iterative scheduling procedure: first, identify the job j* with maximum penalty, schedule it in position (|A_j*(S)| + 1), and assign all its ancestors to positions 1 through |A_j*(S)|. In the optimal schedule Π(S), the remaining jobs (which are not ancestors of j*) must be scheduled optimally in positions (|A_j*(S)| + 2) to |S|. In effect, we can consider a new scheduling problem for the subset S' ⊂ S consisting of these remaining jobs, apply Proposition 1 to this subset, and so on. Our method implements this iterative procedure. We formally describe the algorithm next. In this description, r is the iteration counter, l_r is the pointer to the last position in the schedule that is filled in the r-th iteration, and S_r is the set of remaining unscheduled jobs at the beginning of iteration r.
FORWARD Zero-Base Algorithm

Step 0: Initialization
Set iteration counter r ← 1; set of unscheduled jobs S_1 ← {1,2,...,n}; last position scheduled in previous iteration l_0 ← 0.

Step 1: Iterative step
(a) Find the job j*(r) with maximum penalty in set S_r; A_j*(r)(S_r) := set of all ancestors of j*(r) in S_r.
(b) Set l_r ← l_{r-1} + |A_j*(r)(S_r)| + 1.
(c) Assign jobs of the set A_j*(r)(S_r) to positions (l_{r-1}+1) through (l_r - 1) in increasing order of job indices. Assign j*(r) to position l_r.
(d) Set S_{r+1} ← S_r - A_j*(r)(S_r) - {j*(r)}.
(e) If S_{r+1} is empty, stop; the current schedule is optimal. Else, set r ← r+1, and return to Step 1(a).

Observe that the iterative step is performed at most n times. We refer to the job j*(r) with the maximum penalty at the r-th step as the r-th bottleneck job. As r increases, the corresponding bottleneck jobs have successively lower penalties.
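The following sketch (our Python, with a hypothetical arc set chosen to reproduce the example of Section 3.1.1 below) implements the algorithm as stated: jobs are scanned in decreasing penalty order, and each new bottleneck pulls its unscheduled ancestors in front of it in increasing index order.

```python
def forward_zero_base(jobs_by_penalty, B):
    """jobs_by_penalty: job indices in decreasing penalty order.
    B[j]: immediate predecessors of j (jobs indexed so i < j if i precedes j)."""
    scheduled, sequence = set(), []

    def unscheduled_ancestors(j):
        found, stack = set(), [i for i in B.get(j, []) if i not in scheduled]
        while stack:
            i = stack.pop()
            if i not in found:
                found.add(i)
                # scheduled jobs' ancestors are already scheduled: prune there
                stack.extend(a for a in B.get(i, []) if a not in scheduled)
        return found

    for j_star in jobs_by_penalty:               # successive bottleneck jobs j*(r)
        if j_star in scheduled:
            continue
        block = sorted(unscheduled_ancestors(j_star))   # increasing index order
        sequence.extend(block + [j_star])               # step 1(c)
        scheduled.update(block + [j_star])
    return sequence

B = {4: [3], 5: [1, 2], 6: [3, 5]}               # hypothetical precedence arcs
print(forward_zero_base([5, 6, 4, 1, 3, 2], B))  # [1, 2, 5, 3, 6, 4]
```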
3.1.1 Example

To illustrate the Forward algorithm, consider the precedence graph and the relative ordering of 6 jobs (in decreasing order of penalties) shown in Figure 3. Initially, all jobs are unscheduled, and job 5 has the largest penalty (i.e., j*(1) = 5). This job has two unscheduled ancestors, jobs 1 and 2, i.e., A_5(S_1) = {1,2}. The first iteration schedules these ancestors in positions 1 and 2, and schedules job 5 in position 3, i.e., l_1 = 3. At the second iteration, job 6 has the largest penalty among all remaining jobs. Its unscheduled ancestor, job 3, is assigned to position 4, while job 6 is scheduled in position 5. In the final iteration, the only remaining job (job 4) is scheduled in the last position. Table 1 summarizes these computations.

3.1.2 Data Structures and Computational Complexity

To perform the Forward algorithm's computations efficiently, we maintain a special data structure, and make some minor algorithmic changes. Our implementation first sorts the jobs in decreasing order of penalties prior to initiating the main algorithm. This sorting operation requires O(n log n) effort, and will facilitate the process of identifying the bottleneck job at each iteration of the main procedure. Also, our implementation does not determine the exact positions for the unscheduled ancestors in the set A_j*(r)(S_r) (i.e., it does not perform step 1(c)) during that iteration. Instead, we reindex these jobs temporarily, and determine the actual final schedule at the end of the main algorithm by performing an overall sorting operation. Initially, all jobs have a temporary index of 0. At iteration r, we assign the temporary index (nr + j) to each job j ∈ A_j*(r)(S_r) that must be scheduled in the r-th iteration, and job j*(r) is assigned the index (nr + j*(r)). Thus, all jobs scheduled in iteration r (between positions (l_{r-1}+1) and l_r) have temporary indices in the range (nr+1) to n(r+1). After the main algorithm terminates, we sort the jobs in increasing order of their temporary indices (O(n log n) effort) to obtain the final optimal schedule. Observe that the largest possible value of a temporary index is n(n+1). Also, at intermediate iterations, the jobs that have not yet been scheduled are easy to identify since they all have temporary indices of 0.

Let us now analyze the computational complexity of the Forward algorithm. First, identifying the successive bottleneck jobs involves sequentially scanning the sorted list of jobs, which requires O(n) effort in total. (At step r, we examine the jobs following j*(r-1) in the sorted list; the first job with a temporary index of 0 is the r-th bottleneck job j*(r).) Now, consider the effort required to identify the unscheduled ancestors of job j*(r) (in step 1(a)). Starting with j*(r), we trace back all unscheduled ancestors using the pointers to the list of immediate predecessors in the linked list representation of the precedence graph. If we encounter a previously scheduled job, we need not explore its ancestors since these ancestors must have been scheduled previously. Thus, the total effort required to identify the members of the sets A_j*(r)(S_r) is O(m) over all iterations (since we examine each edge in the precedence graph exactly once). Combined with the initial and final sorting operations, the overall complexity of the Forward algorithm is O(m + n log n).

In general, the number of arcs m in the precedence graph is O(n²); hence, the Forward algorithm is no better than Lawler's original algorithm in the worst case. However, for problems with sparse precedence graphs, we expect the Forward algorithm to perform better.
The Forward algorithm also extends to the more general 1/prec, r_j, pmtn/f_max problem where jobs have different release times that are known in advance, and preemption is permitted. Appendix 1 describes this extension. Later (in Section 5), we use the schedule generated by this enhanced method as the benchmark to evaluate the effectiveness of on-line reoptimization when job release times are not known in advance.
3.1.3 Adapting Lawler's algorithm for consistent penalty functions

Note that when the penalty functions are consistent, we can also adapt Lawler's original O(n²) algorithm for 1/prec/f_max to run in O(m + n log n) time. Recall that, at each stage k = n, n-1, ..., 1, Lawler's algorithm selects the eligible job j ∈ R_k with the smallest penalty at the current completion time T_k. For general penalty functions, the order of eligible jobs (arranged in increasing order of penalty values at T_k) might change from stage to stage. With consistent penalty functions, however, the order is invariant. To exploit this property we use a heap structure to store the currently eligible jobs in sorted order (increasing penalties) at each stage. At stage k, we: (i) schedule the job that is currently at the root of the heap, (ii) delete this job from the heap, and (iii) insert in the heap all its immediate predecessors that just became eligible. Inserting and removing each job from the heap entails O(n log n) total effort; and, checking the eligibility of jobs at each stage requires O(m) effort (since each arc of the precedence graph must be examined once). Hence, the overall complexity of this heap implementation of Lawler's (static) algorithm is O(m + n log n), which is the same as the computational complexity of our Forward algorithm. However, as we show next, the Forward algorithm is more amenable to updating existing schedules.
3.2 The FORWARD Updating Algorithm

For the updating problem, we are given an optimal n-job schedule, and a new job, indexed as (n+1), with prespecified immediate predecessors B_{n+1}, arrives. We initially assume that job (n+1) does not have any successors in the current set of n jobs; later, we indicate how to apply the updating method when the new job also has successors among currently scheduled jobs. Let Π = {j_1, j_2, j_3, ..., j_n} denote the current optimal n-job schedule, where j_k denotes the index of the job that is scheduled in the k-th position. The updating problem consists of constructing a new optimal schedule Π' = {j'_1, j'_2, ..., j'_{n+1}} that includes the new job (n+1).

The updating procedure uses information on the bottleneck jobs corresponding to the current schedule. As we mentioned in Section 3.1, successive bottleneck jobs must have successively lower penalties; hence, the current sequence Π already lists the bottleneck jobs in order of decreasing penalties. Consider now the position for job (n+1) in the (n+1)-job optimal schedule Π', assuming we applied the Forward zero-base algorithm. Since job (n+1) does not precede any current jobs, its position in the schedule is determined solely by its penalty function relative to the current bottleneck jobs. In particular, suppose the new job's penalty lies between the penalties for the (r-1)-th and r-th bottleneck jobs. Since the current sequence schedules existing bottleneck jobs in decreasing penalty order, we can identify the index r in linear time. The updating procedure must merely insert job (n+1) and all its previously unscheduled ancestors (i.e., ancestors that are not scheduled in positions 1 to l_{r-1}) immediately after the (r-1)-th bottleneck job j*(r-1). Assuming that we are initially given only the immediate predecessors B_{n+1} of job (n+1), finding the members of A_{n+1}(S_r) (the set of unscheduled ancestors of job (n+1)) requires O(m') operations, where m' is the number of edges in the precedence subgraph induced by job (n+1) and its ancestors. We must then assign these ancestors to consecutive positions, starting with position (l_{r-1}+1). Observe that the current schedule satisfies the precedence constraints among all ancestors of job (n+1). Hence, the jobs in A_{n+1}(S_r) need not be re-sorted to satisfy precedence constraints; instead, we get a feasible schedule by merely scheduling these ancestors in the order in which they occur in the current schedule. Adjusting the current schedule in this manner requires at most O(n') effort, where n' is the number of ancestors of job (n+1). Thus, the overall complexity of the Forward updating procedure is O(m' + n'), compared with O(m + n log n) for the Forward zero-base algorithm.
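A minimal sketch of this splice (our code, not the paper's implementation), continuing the hypothetical six-job instance used earlier: given the prefix length l_{r-1} and the new job's ancestor set, the update keeps the head of the schedule untouched, pulls forward the unplaced ancestors in their current relative order, and inserts the new job right after them.

```python
def forward_update(sequence, l_prev, new_job, new_ancestors):
    """sequence: current optimal order; l_prev: last position (1-based) filled
    through bottleneck r-1, where the new job's penalty falls between the
    (r-1)-th and r-th bottlenecks; new_ancestors: ancestor set of the new job."""
    head = sequence[:l_prev]
    tail = sequence[l_prev:]
    # ancestors not already in the head keep their current relative order
    moved = [j for j in tail if j in new_ancestors]
    rest = [j for j in tail if j not in new_ancestors]
    return head + moved + [new_job] + rest

# Section 3.2.1's example: job 7 (predecessors 4 and 5) ranks between
# bottlenecks 5 and 6, so l_prev = 3; its ancestor set is {1, 2, 3, 4, 5}.
print(forward_update([1, 2, 5, 3, 6, 4], 3, 7, {1, 2, 3, 4, 5}))
# -> [1, 2, 5, 3, 4, 7, 6]
```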
Finally, note that the updating procedure can easily accommodate new jobs that must precede existing jobs. Let j_s be the immediate successor of job (n+1) that is scheduled earliest in the current schedule, and let l_s denote its position in the current schedule. In the new schedule, the jobs that are currently scheduled in positions l_s and beyond will retain their relative order. Therefore, we need to apply the updating procedure only to jobs that are currently scheduled in positions 1 to l_s - 1.

3.2.1 Example
For the example shown in Figure 3, Figure 4 illustrates the updating calculations when a new job (job 7) enters the system. This job has two immediate predecessors, jobs 4 and 5, and its penalty value lies between those of jobs 5 and 6. Job 7 must, therefore, be scheduled between the two consecutive bottleneck jobs 5 and 6. Job 7 has two ancestors, jobs 3 and 4, that are not scheduled prior to job 5. Hence, we assign these two jobs to positions 4 and 5, respectively (preserving their relative order in the current schedule); job 7 occupies position 6, followed by job 6. Figure 4 shows the updated schedule.
4. Computational results

Section 3 showed that the updating algorithm has better worst-case complexity than the zero-base algorithm. To verify this computational superiority in practice, we compared the computation times of the Forward zero-base algorithm and the updating procedure for an extensive set of random test problems ranging in size from 100 jobs to 400 jobs. We first describe the random problem generating procedure before presenting the computational results.

4.1 Random Problem Generation

Our random problem generator requires two user-specified parameters: the number of jobs (n), and the density δ of the precedence graph (0 < δ < 1). Initially, we attempted to generate precedence graphs with random topologies by independently selecting arcs (i,j), for any pair of nodes i and j, with probability δ. We discovered, however, that the resulting precedence graphs contained many redundant arcs. For example, if the graph contains arcs (i,j) and (j,k), then arc (i,k) can be deleted since this precedence order (i.e., i preceding k) is implied by the other two arcs. Consequently, the reduced graph (with redundant arcs eliminated) was often much sparser than the desired density values. To overcome this problem and to avoid checking for redundancies, we decided to use layered precedence graphs that contain arcs only between successive layers; hence, none of the arcs is redundant. To generate a random layered graph containing n nodes and with density parameter δ, the problem generator:

• randomly selects the number of layers (L) in the graph (0 < L < n);
• equally divides the number of nodes among the layers; and,
• for each pair of nodes i and j in successive layers, selects arc (i,j) with probability δ.

For the updating problem, the random generator assigns the new job (n+1) to a new layer, and connects node (n+1) to nodes in the previous layer with probability δ. A sketch of this generator appears below.
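The following is a rough Python rendering of that generator (the paper's code was PASCAL; the layer-count range, chunking rule, and seeds here are our choices):

```python
import random

def layered_graph(n, delta, seed=0):
    """Random layered precedence graph: arcs only between consecutive layers."""
    rng = random.Random(seed)
    L = rng.randint(2, n - 1)                  # number of layers, 1 < L < n
    size = -(-n // L)                          # ceil(n/L): jobs per layer
    nodes = list(range(1, n + 1))
    layers = [nodes[k:k + size] for k in range(0, n, size)]
    arcs = [(i, j) for a, b in zip(layers, layers[1:])
            for i in a for j in b if rng.random() < delta]
    return layers, arcs

def add_new_job(layers, arcs, delta, seed=1):
    """Updating instance: job n+1 goes in a new layer, wired back to the
    previous last layer with probability delta per node."""
    rng = random.Random(seed)
    new = layers[-1][-1] + 1
    arcs = arcs + [(i, new) for i in layers[-1] if rng.random() < delta]
    return layers + [[new]], arcs

layers, arcs = layered_graph(n=100, delta=0.25)
layers, arcs = add_new_job(layers, arcs, delta=0.25)
```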
We implemented the zero-base and updating versions of the Forward algorithm in PASCAL on a Sun 4/390 workstation. For our computational tests, we considered four network sizes, with number of nodes n = 100, 200, 300, and 400, and five values of the density parameter δ = 0.10, 0.25, 0.50, 0.75, and 0.90. For each combination (n,δ), we generated 100 random problem instances. Table 2 summarizes the mean and standard deviation (over the 100 random instances) of the CPU times (to add the (n+1)-th job to the current schedule) for the zero-base and updating versions of the Forward algorithm for all the (n,δ) combinations.

As Table 2 shows, the updating algorithm is faster than the zero-base version by a factor ranging from 2 to 5. Thus, for scheduling contexts that require numerous frequent updates, the magnitude of computational savings using the updating method can be substantial. For the 100-job problems, the CPU time for individual problem instances varies widely, as indicated by the large standard deviation (relative to the mean). As the problem size increases, the CPU time for updating relative to zero-base scheduling appears to increase. The density parameter does not seem to have a significant or consistent effect on this ratio.
5. Competitiveness of On-line Reoptimization

Having demonstrated the relative efficiency of using tailored updating methods instead of zero-base algorithms to accommodate new jobs, we now examine the effectiveness of on-line reoptimization as a heuristic strategy to schedule dynamic systems. One approach to evaluate this effectiveness is to assume a tractable stochastic model for job arrivals, processing times, and precedence relationships, and analyze the expected performance of on-line reoptimization (or other dispatch rules) in this framework. However, this mode of analysis is often sensitive to the choice of the stochastic model governing the occurrence of random events.
Recently, several researchers in theoretical computer science (e.g., Borodin et al. [1987], Chung et al. [1989], Manasse et al. [1988]) have developed an alternate approach to study on-line effectiveness using the notion of competitiveness. The approach involves characterizing the worst-case performance of the on-line method compared to an optimal off-line procedure that has perfect information about the future. In particular, for problems with a minimization objective, an on-line algorithm A is said to be c-competitive if the inequality

    C_A ≤ c·C_O + a

holds for any instance of the on-line problem (for some constant a). Here, C_A denotes the "cost" incurred by the on-line algorithm A, and C_O is the cost of the optimal off-line solution with clairvoyance. Thus, competitiveness is a useful measure for performance analysis of incremental algorithms. Researchers have started applying this measure to scheduling problems only recently; Shmoys, Wein and Williamson [1991] address competitiveness issues related to on-line scheduling of parallel machines to minimize makespan.
This section demonstrates the underlying principles and techniques of competitiveness analysis applied to our single machine scheduling problem. Developing bounds on the relative difference between the on-line and off-line optimal objective values for general penalty functions f_j(·) is difficult since we cannot exploit any special properties of the solutions. We, therefore, need to separately study various specializations of f_j(·). We consider two types of penalty functions: subadditive functions and lateness. For subadditive penalty functions, we show that on-line reoptimization is 2-competitive when we permit preemptions, and (ρ+2)-competitive for non-preemptive scheduling, where ρ is the aspect ratio (defined later). We then prove 2-competitiveness of on-line reoptimization for the lateness penalty case (delivery time version).
Before describing and proving the competitiveness results, let us clarify the context and the mechanics of on-line reoptimization. Jobs arrive randomly at various release times r_j. Assume that jobs are indexed in the order in which they arrive. We study effectiveness for both preemptive and non-preemptive scheduling problems. First, consider the case when preemptions are permitted, i.e., at each arrival epoch the job currently in process, say, job u, can be interrupted and resumed later without any additional setup or reprocessing effort. In this case, applying the updating method involves: (i) determining the rank, say r*, (in decreasing penalty order) of the new job j relative to all the currently available jobs (including the current in-process job u), and (ii) inserting job j and all its unscheduled ancestors immediately after the (r*-1)-th bottleneck job in the current schedule. Notice that the current job u is preempted only if job j has a higher penalty. In the non-preemptive case, job u must necessarily be completed first, and only the remaining jobs can be rescheduled. Hence, job j is ranked only relative to these remaining jobs.
This on-line updating algorithm is a heuristic method that does not guarantee long-run optimality of the schedules. The benchmark for comparing the performance of the on-line method is an optimal off-line schedule that has prior knowledge (at time 0) about the exact arrival times r_j for all jobs j. In Graham et al.'s [1979] nomenclature, the off-line schedule is the optimal solution to either the 1/prec, r_j, pmtn/f_max or the 1/prec, r_j/f_max problem, depending on whether or not preemption is permitted. Note that the 1/prec, r_j, pmtn/f_max problem (with preemption) is polynomially solvable using, say, the enhanced Forward algorithm described in Appendix 1, while the 1/prec, r_j/f_max problem (without preemption) is known to be NP-hard. Indeed, the non-preemptive problem remains NP-hard even if we relax the precedence constraints and restrict the maximum penalty f_max to lateness L_max (i.e., for the 1/r_j/L_max problem).

5.1 Subadditive Penalty functions
A penalty function f_j(·) is said to be subadditive if it satisfies the condition:

    f_j(t_1 + t_2) ≤ f_j(t_1) + f_j(t_2)  for all t_1, t_2 ≥ 0.

Interesting special cases of subadditive functions include concave penalty functions, and linear completion time penalties (i.e., f_j(t) = w_j t). We denote the maximum subadditive penalty as f_max. This section studies the competitiveness of on-line reoptimization, with and without preemption, when all jobs have consistent, subadditive penalty functions. When preemptions are permitted, we show that the on-line reoptimization strategy is 2-competitive, i.e., the objective value of the on-line schedule is at most twice the optimal off-line value. When preemption is prohibited, the worst-case ratio increases to (ρ+2), where ρ is a specified ratio of job processing times.
5.1.1 Scheduling with Preemptions

We now show that, for any k-job problem, the preemptive schedule obtained using on-line reoptimization (without anticipating future job arrivals) has a worst-case performance ratio of 2 relative to the optimal off-line schedule for the 1/prec, r_j, pmtn/f_max problem constructed by the enhanced Forward algorithm (Appendix 1).

Theorem 1: For the 1/prec, r_j, pmtn/f_max problem, on-line reoptimization is 2-competitive.
Proof: Let Π be the preemptive schedule obtained using on-line reoptimization for a k-job problem. Let t_j be the completion time for job j in schedule Π, and let φ_k = f_j'(t_j') = max {f_j(t_j): 1 ≤ j ≤ k} be the (maximum) completion time penalty for schedule Π, where j' denotes the critical job. Consider the interval of time [r_j', t_j'] in schedule Π between the arrival of job j' and its completion. Let J' denote the set of all jobs, with penalty functions equal to or higher than that of job j', that are completed later than r_j' in schedule Π (the set J' includes job j' itself), and let A(J') denote the set of ancestors of jobs in J' that do not themselves belong to J'.

Consider now several characteristics of the schedule Π. First, since j' is the critical job, all jobs following job j' must have lower penalty functions (otherwise, a job with a higher penalty function would be the critical job). Clearly, every job that the on-line schedule completes in the interval [r_j', t_j'] must either be a member of the set J' or an ancestor of some job j ∈ J'; moreover, since job j' is available throughout this interval, the machine is never idle in it. Hence, the completion time t_j' has the following upper bound:

    t_j' ≤ r_j' + Σ_{j∈J'} p_j + Σ_{j∈A(J')} p_j.    (1)

Now consider the optimal off-line schedule Π*, which uses prior information on job release times. Let φ*_k be the (maximum) completion time penalty for this schedule, and let T_j be the completion time for job j in Π*. Among all jobs belonging to the set J', let j'' be the job that is scheduled last in Π*. Then,

    φ*_k ≥ f_j''(T_j'') ≥ f_j'(T_j''),    (2)
    T_j'' ≥ Σ_{j∈J'} p_j + Σ_{j∈A(J')} p_j,    (3)
and
    T_j'' ≥ r_j'.    (4)

Inequality (2) follows from the definition of φ*_k, and because job j'' ∈ J' has an equal or higher penalty function than job j'. Inequality (3) holds because all jobs of J' and their ancestors must be completed by time T_j'' in Π* (job j'' is completed last among them). Inequality (4) holds because job j'' is completed no earlier in Π* than job j', which cannot complete before its release time r_j'. From (1), (3), and (4), we have

    t_j' ≤ T_j'' + T_j'' = 2T_j''.    (5)

Inequalities (2) and (5) imply that

    φ_k = f_j'(t_j') ≤ f_j'(2T_j'')   from (5)
        ≤ 2 f_j'(T_j'')   from subadditivity, and
        ≤ 2 φ*_k   from (2).    (6)

Hence, the on-line reoptimization strategy is 2-competitive. ∎

Claim: The worst-case bound of 2 (proved in Theorem 1) is tight.

The following example justifies this claim. Consider a 3-job problem instance with linear completion time penalties f_j(t) = β_j t. Jobs 1 and 2 arrive at time 0, each requiring 10 units of processing time, and job 3 arrives at time 10 with a very small processing time. Job 1 is unrelated to the other jobs, while job 2 precedes job 3. The jobs have the following ordering in terms of penalty functions: f_3 > f_1 > f_2. In our notation, the following parameters describe this problem instance: r_1 = 0, r_2 = 0, and r_3 = 10; p_1 = 10, p_2 = 10, and p_3 = ε; B_3 = {2}; and penalty coefficients β_1 = 1, β_2 = 0, and β_3 = 100. At time 0, both jobs 1 and 2 are available, but job 1 has the higher penalty. Hence, the on-line method processes it first. The next epoch is at time 10, when job 3 arrives (and job 1 completes). At this time, the on-line method starts job 2 (to satisfy job 3's precedence constraint), and finally completes job 3 at time (20 + ε). Job 3 is the critical job, with a completion time penalty of 100·(20+ε). On the other hand, the optimal off-line scheduling method anticipates job 3's higher penalty and delayed arrival at t = 10. Hence, the optimal off-line sequence is 2-3-1. Job 3, completed at (10+ε), is again the critical job, with a penalty of 100·(10+ε). Thus, the ratio of on-line to off-line penalty values is (20+ε)/(10+ε), which approaches 2 as ε approaches 0. Hence, the worst-case ratio of 2 is tight.

Next, we show that, for the non-preemptive case, the worst-case ratio is higher.
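For concreteness, a quick numeric check of the example above (our arithmetic, with ε = 10⁻³):

```python
# Linear penalties f_j(t) = beta_j * t with beta = (1, 0, 100);
# p = (10, 10, eps); r3 = 10; job 2 precedes job 3.
eps = 1e-3
beta = {1: 1.0, 2: 0.0, 3: 100.0}

# on-line: job 1 on [0,10], job 2 on [10,20], job 3 on [20, 20+eps]
online = max(beta[1] * 10, beta[2] * 20, beta[3] * (20 + eps))

# off-line (clairvoyant) sequence 2-3-1: job 2 on [0,10],
# job 3 on [10, 10+eps], job 1 last on [10+eps, 20+eps]
offline = max(beta[2] * 10, beta[3] * (10 + eps), beta[1] * (20 + eps))

print(online / offline)        # -> 1.9999...; tends to 2 as eps -> 0
```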
5.1.2 Scheduling without preemptions

As we noted earlier, finding the optimal off-line schedule with no preemption is computationally intractable. Hence, unlike the preemptive case, we do not have a convenient characterization of the optimal off-line schedule. To evaluate the competitiveness of on-line reoptimization for the non-preemptive case, we use the following strategy. We know that the optimal off-line preemptive schedule has a lower penalty than the optimal off-line non-preemptive schedule. Hence, if we can derive a worst-case ratio for the on-line, non-preemptive schedule with respect to the optimal, off-line preemptive schedule, this ratio should also hold with respect to the off-line non-preemptive solution. Let ρ denote the maximum, over all job pairs i and j, of the ratio of the processing time of job j to the sum of processing times of job i and all its ancestors, i.e.,

    ρ = max { p_j / (p_i + Σ_{l∈A_i} p_l) : i, j ≤ k, i ≠ j }.

We refer to ρ as the aspect ratio. Note that the denominator in the above expression is a lower bound on the earliest possible completion time of job i. Indeed, our competitiveness result applies even if we replace the denominator of ρ with a tighter lower bound (on job i's completion time) involving, say, the release times of job i and its ancestors.

Theorem 2: For the 1/prec, r_j/f_max problem, the on-line reoptimization method is (ρ+2)-competitive.
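A small sketch (ours) of the aspect-ratio computation, evaluated on the augmented four-job instance used later in this section to show tightness:

```python
def aspect_ratio(p, ancestors):
    """rho = max over job pairs i != j of p_j / (p_i + sum of p over A_i)."""
    jobs = list(p)
    return max(p[j] / (p[i] + sum(p[a] for a in ancestors[i]))
               for i in jobs for j in jobs if i != j)

# p1 = 10, p2 = 10, p3 = eps, p4 = 20; only job 3 has an ancestor (job 2)
p = {1: 10.0, 2: 10.0, 3: 1e-3, 4: 20.0}
A = {1: set(), 2: set(), 3: {2}, 4: set()}
print(aspect_ratio(p, A))      # 2.0, attained by p4 / p1
```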
Proof: This proof is very similar to the proof of Theorem 1. As before, let Π_k be the on-line, non-preemptive schedule for a k-job problem. Let j' be the critical job and φ_k the (maximum) penalty of this schedule, defined by the completion of job j' in schedule Π_k. Consider the interval of time [r_j', t_j'] between the arrival of job j' and its completion. Let u be the job that is currently in process in Π_k when job j' arrives, and let J' be the set of all jobs, with penalties equal to or higher than that of j', that are completed in this interval. Except for the in-process job u, all other jobs completed in the time interval [r_j', t_j'] either belong to J' or are ancestors of one or more jobs in J'. Let A(J') be the set of all ancestors of jobs in J' that do not themselves belong to J'. Let Π* be the optimal, off-line preemptive schedule; job j'' ∈ J' is the job scheduled last in Π* among all jobs of J'. As before, T_j denotes the completion time of job j in Π*, and φ*_k the schedule's penalty value. The job completion times must satisfy the following inequalities:

    t_j' ≤ r_j' + p_u + Σ_{j∈J'} p_j + Σ_{j∈A(J')} p_j;    (7)
    T_j'' ≥ max { r_j', Σ_{j∈J'} p_j + Σ_{j∈A(J')} p_j };    (8)
and
    T_j'' ≥ max { p_j + Σ_{l∈A_j} p_l : j ∈ J' }.    (9)

Inequalities (7) and (8) follow from the same arguments as inequalities (1), (3), and (4) of Theorem 1, after accounting for the in-process job u, which must be completed before the jobs of J' and A(J') can proceed. Inequality (9) holds because job j'' completes last among the jobs of J' in Π*, and no job j ∈ J' can complete before processing itself and all its ancestors. These inequalities imply that:

    φ_k = f_j'(t_j')
        ≤ f_j'(r_j' + Σ_{j∈J'} p_j + Σ_{j∈A(J')} p_j) + f_j'(p_u)   using subadditivity and (7)
        ≤ 2 φ*_k + f_j'(p_u)   using (8), as in Theorem 1
        ≤ 2 φ*_k + f_j'(ρ·T_j'')   by the definition of ρ and (9)
        ≤ 2 φ*_k + ρ·f_j'(T_j'')   using subadditivity
        ≤ (ρ+2) φ*_k.

Thus, the worst-case ratio for the on-line, non-preemptive schedule is at most (ρ+2) relative to the optimal off-line, preemptive schedule. Hence, the on-line reoptimization strategy is (ρ+2)-competitive for the 1/prec, r_j/f_max problem.
Claim: The worst-case bound of (ρ+2) for non-preemptive schedules is tight.

To prove this claim, consider the following augmented version of the previous worst-case problem instance (described after Theorem 1). In addition to the 3 jobs in that example, we have a fourth job that arrives at time r_4 = 0, with penalty coefficient β_4 = 0.5 (so that job 1 outranks job 4) and a processing time of p_4 = 20. Also, job 3 now arrives at r_3 = 10 + δ for some small δ > 0. Note that the aspect ratio ρ for this problem instance is p_4/p_1 = 2. Consider, first, the schedule obtained using on-line reoptimization. Job 1 is scheduled first, and since job 3 is not yet available at time 10, the method next schedules job 4, followed by job 2 and finally job 3. Job 3 completes at time (40 + ε); it is the critical job, with a penalty of 100·(40 + ε). Contrast this on-line schedule with the following optimal, non-preemptive schedule: Job 2 starts at time 0 and completes at time 10; the processor is idle from time 10 to time (10 + δ); job 3 starts at time (10+δ) and completes at time (10+δ+ε), followed by jobs 1 and 4. Again, job 3 is the critical job, with a completion time penalty value of 100·(10+δ+ε). Thus, the ratio of the on-line penalty and the optimal, off-line (non-preemptive) penalty approaches (ρ + 2) = 4 as δ and ε tend to zero.
5.2 The Lateness Penalty function

We now consider the lateness objective L_max, i.e., the penalty for job j is f_j(t_j) = t_j - d_j, where t_j and d_j are, respectively, the completion time and due date of job j, and L_max = Max {f_j(t_j): j = 1,2,...,n}. Our discussion focuses on non-preemptive, precedence-constrained scheduling for this problem. The best off-line approximation algorithm for the 1/r_j/L_max problem was developed by Hall and Shmoys [1991]. They show that, for the delivery-time formulation of this model (described later), a 4/3-approximation algorithm is possible using an enhanced version of Jackson's earliest due date rule. Although Jackson's rule can be applied on-line, the approximation techniques used by Hall and Shmoys sacrifice the on-line characteristic to achieve the tighter bounds.

To study on-line competitiveness, we cannot work directly with the maximum lateness objective L_max since some problem instances may have zero or negative optimal off-line L_max values. (If the off-line L_max is zero, the worst-case competitiveness ratio becomes unbounded for any on-line algorithm that is even slightly suboptimal.) Instead of redefining competitiveness, we transform the L_max problem to the following equivalent delivery time version (Potts [1980]), which has a positive optimal value for all problem instances.
5.2.1 The Delivery Time Formulation

Let d_j denote the job due dates, and let K be a value greater than the largest due date. Now define the tail q_j of job j as q_j = K - d_j. We interpret the tail q_j as the time required to deliver job j after it is completed. Thus, the delivery time of job j is t_j + q_j, where t_j is the completion time of job j. The objective of the scheduling problem now consists of minimizing the maximum delivery time over all jobs. The optimal schedule for this delivery time version also minimizes maximum lateness, since the two objectives differ by the constant K. Observe that the penalty function f_j(t_j) = t_j + K - d_j implied by the delivery time objective is consistent (according to our definition in Section 2), with jobs that are due earlier having higher penalty functions. Thus, at each job arrival epoch, on-line reoptimization for the delivery time problem (without preemptions) involves adjusting the current schedule to process the available job with the earliest due date as soon as possible (subject to completing the current in-process job and all unfinished ancestors of the EDD job).
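A tiny sketch of this transformation (our code; release times are ignored here for brevity):

```python
def delivery_objective(sequence, p, d):
    """Maximum delivery time of a sequence, using tails q_j = K - d_j."""
    K = max(d.values()) + 1                     # any K above the largest due date
    q = {j: K - d[j] for j in d}                # tails
    t, worst = 0.0, float('-inf')
    for j in sequence:
        t += p[j]                               # completion time t_j
        worst = max(worst, t + q[j])            # delivery time of job j
    return worst

# minimizing this objective also minimizes L_max = max(t_j - d_j),
# since the two differ by the constant K
print(delivery_objective([2, 1], {1: 3.0, 2: 1.0}, {1: 5.0, 2: 2.0}))   # 5.0
```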
5.2.2 2-Competitiveness of On-line Reoptimization

For any given k-job instance of the non-preemptive delivery time problem, let Π be the schedule obtained using on-line reoptimization. Let DT_j be the delivery time of job j in this schedule; φ_k is the maximum delivery time value over all jobs j. Denote the (maximum) delivery time of the optimal, off-line schedule Π* as φ*_k.

Theorem 3: On-line reoptimization is 2-competitive for the non-preemptive delivery time problem, i.e., φ_k ≤ 2 φ*_k for all k.
Proof: We prove this result by induction. Assume that jobs are indexed in the order in which they arrive, and first consider k = 2. Since the delivery time of a job equals the sum of its start time, processing time, and tail, the optimal off-line value φ*_2 must be greater than or equal to the maximum processing time over the k jobs. In the two-job case, if both jobs arrive simultaneously, then the on-line updating method also constructs the optimal schedule. Suppose the release times are different, say, job 1 arrives before job 2. When the optimal off-line solution consists of processing job 2 before job 1, the on-line method schedules job 1 first. Relative to this optimal schedule, the on-line schedule delays job 2 by at most p_1. Hence, job 2 incurs an incremental delivery time penalty of at most p_1 relative to its penalty in the optimal off-line schedule Π*. Furthermore, job 2's penalty in the optimal off-line schedule is a lower bound on the optimal off-line objective function value φ*_2. Hence,

    φ_2 ≤ φ*_2 + p_1 ≤ φ*_2 + max {p_1, p_2} ≤ 2 φ*_2.

Thus, the on-line method is 2-competitive for 2 jobs.

Now suppose the method is 2-competitive for all k' ≤ (k-1) jobs. We show that it must also be 2-competitive for k jobs. Let job j' ≤ k be the critical job in the on-line schedule, i.e., φ_k = DT_j' = s_j' + p_j' + q_j', where s_j' is the start time of job j'.

Case 1: s_j' < s_k. In this case, job j' starts earlier than the new job k. Since the Forward updating algorithm leaves higher-penalty jobs unaffected when job k arrives, job j' not only starts no later than before but also has the highest delivery time. Hence, job j' must have been the critical job even in the (k-1)-job on-line schedule, so that φ_k ≤ φ_{k-1}. But φ_{k-1} ≤ 2 φ*_{k-1} by the induction hypothesis, and φ*_{k-1} ≤ φ*_k. Hence, φ_k ≤ 2 φ*_k.

Case 2: s_j' ≥ s_k. In this case,

    φ_k = s_j' + p_j' + q_j' = r_j' + p_j' + q_j' + (s_j' - r_j').

Note, however, that (s_j - r_j) ≤ Σ_{i=1}^{k} p_i for all jobs j; otherwise, the interval of time between the arrival and start of job j contains some idle time, which is impossible using the on-line method (since job j is waiting in the queue). Also, r_j' + p_j' + q_j' is a lower bound on job j''s delivery time in any schedule, and hence on φ*_k; and the total processing time Σ_{i=1}^{k} p_i is also a lower bound on φ*_k (the last-completing job cannot finish before all the work is done). Therefore,

    φ_k ≤ φ*_k + Σ_{i=1}^{k} p_i ≤ 2 φ*_k.

Thus, on-line reoptimization is 2-competitive. ∎
As before, we can show that the worst-case ratio of 2 (proved in Theorem 3) for on-line reoptimization is tight. Indeed, for the non-preemptive delivery time problem, the following example (Kise and Uno [1978], Potts [1980]) shows that any on-line scheduling algorithm that does not introduce forced idleness (i.e., that does not keep the machine idle when the job queue is not empty) must have a worst-case ratio of at least 2. Consider a problem instance with two jobs that are released respectively at r_1 = 0 and r_2 = 1, having processing times p_1 = (P - 1) and p_2 = 1, and due dates d_1 = P and d_2 = 1 (hence, the tails are q_1 = 0 and q_2 = (P - 1)). With no precedence constraints, the best strategy consists of keeping the machine idle for the first time period, and scheduling job 2 before job 1. The maximum delivery time for this solution is (P + 1). On the other hand, any on-line method that does not anticipate future job arrivals or introduce forced idleness will schedule job 1 at time r_1 = 0 (since it is the only available job). Since jobs cannot be preempted, job 2 begins processing only at time (P - 1), and its delivery time is (2P - 1). As P becomes arbitrarily large, the ratio of on-line to optimal off-line delivery time approaches 2. And, on-line reoptimization achieves this lowest possible worst-case ratio.
In retrospect, the 2-competitiveness of on-line reoptimization is not surprising in view of the worst-case bound of 2 for Schrage's heuristic (Potts [1980]). For the 1/r_j/DT_max delivery time problem (without preemption or precedence constraints), Schrage's heuristic consists of applying Jackson's earliest due date rule on-line (with a longest processing time tie-breaking rule), i.e., whenever a job completes, the method dispatches the currently available job with the earliest due date. Note that, without precedence constraints, on-line reoptimization also chooses this "current" EDD job sequence. Potts [1980] used a characterization of Schrage's heuristic schedule (in terms of an interference job) to prove that the method has a worst-case bound of 2. When jobs have precedence constraints, we can transform the delivery time problem 1/r_j, prec/DT_max to an equivalent unconstrained version by revising the job release times and tails as follows: if job i must precede job j, set r_j ← max {r_j, r_i + p_i} and q_i ← max {q_i, q_j + p_j}. Potts' result implies that Schrage's heuristic applied to this transformed problem is 2-competitive. Note that, when a new job arrives, updating the tails (to account for its precedence constraints) involves examining every currently available ancestor of the new job. In contrast, our on-line updating algorithm (Appendix 1) is more efficient since it first locates the new job relative to the current bottleneck jobs, and only examines ancestors that are scheduled later.

6. Conclusions
In this paper we have developed a new Forward algorithm and an updating version of it for one class of scheduling problems. The updating procedure reduces the computational effort to accommodate a new job into an existing schedule by using information from the current schedule. In contrast, applying a zero-base algorithm to reschedule all the jobs from scratch would entail significantly higher computational effort, as illustrated by our computational results of Section 4. Section 5 gives partial results characterizing the effectiveness of using deterministic reoptimization for on-line scheduling.

Our broader purpose in this paper is to demonstrate the scope, and the efficiency and effectiveness issues, in developing on-line updating algorithms for dynamic scheduling problems. In spite of their practical importance in contexts such as real-time control of distributed processors, updating algorithms have not been adequately studied in the scheduling literature. For illustrative purposes, we studied a single machine scheduling problem that can be solved efficiently. Exploring similar updating methods for other scheduling objectives and contexts is an important research direction that merits further investigation.
Appendix
1
FORWARD Algorithm for the
1/prec,
pmtn/f j^ax problem with Consistent Penalties
Fj,
in Section 3 assumes that all jobs are
This
Apf>endix describes an extension to
simultaneously available at time 0.
handle arbitrary, but known, job release times; job preemption is permitted.
The Forward algorithm described
First, we review the notation. We are given n jobs and a precedence graph G = (N, A) containing m arcs. B_j and A_j represent, respectively, the set of immediate predecessors and the set of ancestors of job j. Each job j has a "consistent", non-decreasing penalty function f_j(t): for every pair of jobs i and j, either f_i(t) <= f_j(t) for all t or f_i(t) >= f_j(t) for all t; and f_j(t) >= f_j(t') if t > t'. Let p_j and r_j denote, respectively, the processing time and release time for job j. Without loss of generality, we assume that all ancestors of any job j arrive at or before job j; otherwise, we can set r_j = Max {r_i + p_i : i in B_j}. Also, for convenience, we assume that jobs are indexed in order of release dates; consequently, i < j if job i precedes job j. We require a preemptive schedule that minimizes f_max = Max {f_j(t_j) : j = 1,2,...,n} while satisfying the precedence constraints and release times, where t_j is the completion time for job j in the chosen schedule.
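As a small illustration of the release-time assumption above, the sketch below enforces it in one pass; the function name normalize_release_times and the list-based job representation are our own, and the single pass relies on the index ordering just described (i < j whenever i precedes j), so predecessor values are final before they are used.

    def normalize_release_times(r, p, preds):
        # Enforce r_j >= r_i + p_i for every immediate predecessor i in B_j,
        # i.e., r_j <- max(r_j, Max{r_i + p_i : i in B_j}); scanning jobs in
        # increasing index order propagates the bound along whole chains.
        r = list(r)
        for j in range(len(r)):
            for i in preds[j]:          # preds[j] plays the role of B_j
                r[j] = max(r[j], r[i] + p[i])
        return r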
Scheduling Principle:

As before, the Forward algorithm identifies successive bottleneck jobs, and schedules each bottleneck job as early as possible. As before, we first sort all jobs in decreasing order of penalties. The method starts with an empty schedule, and progressively assigns jobs to appropriate "free" time intervals in the current schedule. At iteration r, the method has scheduled the first (r-1) bottleneck jobs and all their ancestors. Let I^r denote the set of available free time intervals in the current schedule at the start of iteration r. The following steps are performed at iteration r:
Step 1: Identify the r-th bottleneck job j*(r), i.e., job j*(r) has the largest penalty among all the currently unscheduled jobs.

Step 2: Consider each unscheduled ancestor j of job j*(r) in increasing index order: allocate to job j the first available p_j time units after the release time r_j. Update the available free intervals.

Step 3: Allocate to job j*(r) the first available p_{j*(r)} time units after the release time r_{j*(r)}.
The algorithm iteratively repeats this process, finding the next bottleneck job and scheduling this job and all its ancestors as early as possible in increasing index order. Note that, as we identify and schedule more bottleneck jobs, the schedule becomes fragmented, i.e., it has "holes" due to delayed releases for certain jobs. These holes are filled whenever possible in subsequent steps; filling the holes might introduce preemptions, i.e., the processing time of a job may be distributed over several intervals.
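The sketch below mirrors Steps 1-3 under simplifying assumptions: the decreasing-penalty order is supplied as a precomputed list rank (well defined because the penalty functions are consistent), ancestors[j] holds the full ancestor set A_j, a finite planning horizon bounds the free-interval list, and the names allocate_earliest and forward_with_releases are ours. It is intended to illustrate the free-interval bookkeeping and the resulting preemptions, not to reproduce the paper's implementation.

    def allocate_earliest(free, release, need):
        # Take the first `need` time units at or after `release` from the
        # sorted free intervals, splitting intervals as necessary; a job
        # split across several pieces is preempted at the boundaries.
        pieces, remaining = [], []
        for s, e in free:
            if need > 0 and e > release:
                lo = max(s, release)
                take = min(need, e - lo)
                if lo > s:
                    remaining.append((s, lo))          # untouched front
                pieces.append((lo, lo + take))
                need -= take
                if lo + take < e:
                    remaining.append((lo + take, e))   # untouched back
            else:
                remaining.append((s, e))
        if need > 0:
            raise ValueError("planning horizon too short")
        return pieces, remaining

    def forward_with_releases(p, r, ancestors, rank, horizon):
        # Step 1: take the next bottleneck job b in decreasing penalty order.
        # Step 2: schedule its unscheduled ancestors in increasing index
        #         order, each as early as possible after its release time.
        # Step 3: schedule b itself the same way.
        free = [(0.0, float(horizon))]
        schedule, done = {}, set()
        for b in rank:
            if b in done:
                continue
            for j in sorted(a for a in ancestors[b] if a not in done) + [b]:
                schedule[j], free = allocate_earliest(free, r[j], p[j])
                done.add(j)
        return schedule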
Let Π denote the final schedule constructed by the Forward algorithm, and denote the total set of bottleneck jobs by J_B.

Lemma: In the final schedule constructed by the Forward algorithm, f_max = Max { f_j(t_j) : j in J_B }.

Note that this property also holds for the original model when all jobs are simultaneously available at time 0.
Proof of correctness of the Forward Algorithm:

Suppose the schedule Π constructed by the Forward algorithm is not optimal. Let j* be the critical bottleneck job, i.e., the job that determines the penalty of this schedule, so that f_max = f_{j*}(t_{j*}), and suppose j* is the r*-th bottleneck job, i.e., j* = j*(r*). Let Π' be an optimal schedule; let f'_max and t'_j denote, respectively, the maximum penalty in this schedule and the completion time for each job j in this schedule. By hypothesis, f'_max < f_max = f_{j*}(t_{j*}). Since penalty functions are non-decreasing, this inequality implies that job j* must be scheduled earlier in Π', i.e., t'_{j*} < t_{j*}. Since the Forward algorithm schedules all bottleneck jobs as early as possible in order of decreasing penalties, t'_{j*} can be less than t_{j*} only if the schedule Π' completes some previous bottleneck job (i.e., a bottleneck job with a higher penalty than job j* and scheduled before t_{j*} in the Forward schedule Π) on or after t_{j*}. Let j*(r), for some r < r*, denote this bottleneck job. Since t'_{j*(r)} >= t_{j*}, we must have f'_max >= f_{j*(r)}(t'_{j*(r)}) >= f_{j*(r)}(t_{j*}). However, since job j*(r) has a higher penalty than j*, f_{j*(r)}(t_{j*}) >= f_{j*}(t_{j*}) = f_max, contradicting the optimality of schedule Π'.
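The contradiction can be summarized as a single chain of inequalities in the notation above (the first holds because f'_max is the maximum penalty in Π', the second because t'_{j*(r)} >= t_{j*} and penalties are non-decreasing, and the third because j*(r) dominates j* in the consistent penalty order):

    f'_max >= f_{j*(r)}(t'_{j*(r)}) >= f_{j*(r)}(t_{j*}) >= f_{j*}(t_{j*}) = f_max,

which contradicts the hypothesis f'_max < f_max.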
References

Aho, A. V., J. E. Hopcroft, and J. D. Ullman (1974) The Design and Analysis of Computer Algorithms, Addison-Wesley, Reading, Massachusetts.

Baker, K. R., E. L. Lawler, J. K. Lenstra, and A. H. G. Rinnooy Kan (1983) "Preemptive scheduling of a single machine to minimize maximum cost subject to release dates and precedence constraints", Operations Research, 31, pp. 381-386.

Borodin, A., N. Linial, and M. Saks (1987) "An optimal on-line algorithm for metrical task systems", Proc. 19th ACM Symposium on Theory of Computing, pp. 373-382.

Chin, F., and D. Houck (1978) "Algorithms for updating minimum spanning trees", Journal of Computer and System Sciences, 16, pp. 333-344.

Chung, F. R. K., R. L. Graham, and M. E. Saks (1989) "A dynamic location problem for graphs", Combinatorica, 9, pp. 111-131.

Even, S., and Y. Shiloach (1981) "An on-line edge deletion problem", Journal of the Association for Computing Machinery, 28, pp. 1-4.

Frederickson, G. N. (1985) "Data structures for on-line updating of minimum spanning trees, with applications", SIAM Journal on Computing, 14, pp. 781-798.

Frederickson, G. N., and M. A. Srinivas (1984) "On-line updating of degree-constrained minimum spanning trees", Proceedings of the 22nd Allerton Conference on Communication, Control, and Computing, October 1984.

Hall, L. A., and D. Shmoys (1991) "Jackson's rule: Making a good heuristic better", to appear in Mathematics of Operations Research.

Jackson, J. R. (1955) "Scheduling a production line to minimize maximum tardiness", Research Report 43, Management Science Research Project, University of California, Los Angeles.

Kise, H., and M. Uno (1978) "One-machine scheduling problems with earliest start and due time constraints", Mem. Kyoto Tech. Univ. Sci. Tech., 27, pp. 25-34.

Lawler, E. L. (1973) "Optimal sequencing of a single machine subject to precedence constraints", Management Science, 19, pp. 544-546.

Lawler, E. L., J. K. Lenstra, and A. H. G. Rinnooy Kan (1982) "Recent developments in deterministic sequencing and scheduling: A survey", in Deterministic and Stochastic Scheduling, M. A. H. Dempster, J. K. Lenstra, and A. H. G. Rinnooy Kan (eds.), Reidel, Dordrecht.

Malone, T. W., R. E. Fikes, K. R. Grant, and M. T. Howard (1988) "Enterprise: A market-like task scheduler for distributed computing environments", in The Ecology of Computation, B. A. Huberman (ed.), Elsevier Science Publishers B.V., Amsterdam, pp. 177-205.

Manasse, M. S., L. A. McGeoch, and D. D. Sleator (1988) "Competitive algorithms for on-line problems", Proc. 20th ACM Symposium on Theory of Computing, pp. 322-333.

Overmars, M. H., and J. van Leeuwen (1981) "Maintenance of configurations in the plane", Journal of Computer and System Sciences, 23, pp. 166-204.

Potts, C. N. (1980) "Analysis of a heuristic for one machine sequencing with release dates and delivery times", Operations Research, 28, pp. 1436-1441.

Ramamritham, K., and J. A. Stankovic (1984) "Dynamic task scheduling in distributed hard real-time systems", IEEE Software, 1, pp. 96-107.

Sahni, S., and Y. Cho (1979) "Nearly on line scheduling of a uniform processor system with release times", SIAM Journal on Computing, 8, pp. 275-285.

Shmoys, D., J. Wein, and D. Williamson (1991) "On-line scheduling of parallel machines", preprint.

Spira, P. M., and A. Pan (1975) "On finding and updating spanning trees and shortest paths", SIAM Journal on Computing, 4, pp. 215-225.

Tarjan, R. E. (1983) Data Structures and Network Algorithms, Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania.

Zhao, W., and K. Ramamritham (1985) "Distributed scheduling using bidding and focused addressing", Proceedings of the Symposium on Real-Time Systems, December 1985, pp. 103-111.
[Figure 1: Consistent and Non-consistent Penalty Functions; panels (a) and (b).]
[Figure 2: Special Consistent Penalty Functions — weighted linear completion time penalty, weighted quadratic completion time penalty, lateness penalty, and tardiness penalty, each plotted as f(C) against the job completion time.]
[Figure 3: Example for Forward Zero-Base Algorithm — ranking of jobs in order of decreasing penalties; n = number of jobs = 6, m = number of arcs in precedence graph = 7.]
[Figure 4: Example for Forward Updating Algorithm — ranking of jobs in order of decreasing penalties: 5, 1, 7, 6, 3, 2, 4; updated optimal schedule: 1-2-5-3-4-7-6.]
[Table 1: Forward Zero-Base Algorithm — iterations for the 6-node example.]