FLIGHT TRANSPORTATION LABORATORY
FTL REPORT R87-4

PARALLEL PARAMETRIC COMBINATORIAL SEARCH - ITS APPLICATION TO RUNWAY SCHEDULING

Dionyssios A. Trivizas

February 1987
ACKNOWLEDGMENTS
First, I want to express my profound respect and gratitude to the members
of my doctoral committee.
Despite his many responsibilities as director of the Flight Transportation
Laboratory, I enjoyed a complete "teacher - student" relationship with my
advisor, Professor Robert W. Simpson. Full of "conflicts and resolutions", this
relationship was a creative, almost operatic experience. He did not simply
guide me in research; he made me communicate it properly.
Professor Amedeo R. Odoni is responsible for luring me into the field of
Operations Research (OR) and for giving me the opportunity to work and
learn at FTL. He went beyond sharpening my powers of analysis in probability
and queueing theory. He gave me the model of a sincere and concerned teacher.
Professor Stephen C. Graves is the third member of my committee. I thank
him for his comments and advice.
I should not omit from the list of my teachers Professor Tom L. Magnanti
of the Sloan School of Management. His network optimization course was a
rich nutrient for my early intuition and general understanding of OR.
It would be impossible to mention all those people who kindly helped me
one way or another. I need, however, to express my sincere thanks to my
friend and colleague Dr. John D. Pararas. With a certain degree of guilt I
admit to having exploited his limitless patience. Our discussions revealed a
good number of hidden facets and caveats in my research, and his intimate
knowledge of computers and LISP enriched my programming skills, besides
getting me out of trouble.
My research has been mainly supported by grants from the FAA, and I hope
that the results of my work will be useful in its honorable pursuits of safety
and efficiency in Air Traffic Control.
I also thank the computer lab for providing the wonderful TEX word processor.
Last but not least, I want to thank my parents for insisting that I should
become useful to myself and society. They gave me a model of self respect,
discipline and sense of purpose that has kept me going.
As a last word, I hope to have lived up to the expectations of all those who
expected something of me.
PARALLEL PARAMETRIC COMBINATORIAL SEARCH - ITS APPLICATION TO RUNWAY SCHEDULING
Doctoral Dissertation in Flight Transportation - Operations Research
By Dionyssios A. Trivizas
Submitted in partial fulfillment of the requirements for the degree of
"Doctor of Philosophy" at the Massachusetts Institute of Technology, in the
Department of Aeronautics and Astronautics, February 1987.
ABSTRACT
The Runway Scheduling Problem (RSP) addresses the fundamental issues
of airport congestion and energy conservation. It is a variation of the Traveling Salesman Problem (TSP), from which it differs in three basic points: the
maximum position shift (MPS) constraints, the requirement to enforce the triangular inequality in its cost structure, and the multiplicity of runways (corresponding to
multiple salesmen in the TSP).
The RSP is dynamic, requiring fast and frequent schedule updates. The
MPS constraints, designed to prevent inequitable treatment of aircraft, define
a combinatorial neighborhood of tours around a base tour, determined by the
arrival sequence of aircraft in RSP. The neighborhood contains all tours in
which the position of an object (aircraft, city etc.) in the new tour is within
MPS positions of its position in the base tour. The parameter MPS controls
the radius of the neighborhood, which covers the full solution space when MPS
equals half the number of aircraft.
We first describe the RSP and then develop a parallel processor (PPMPS)
that finds the optimal solution in the MPS-neighborhood in time linear in
the number of objects, using up to 4^MPS processors in parallel. Subsequently,
PPMPS is applied to the general RSP, and a case study is presented to justify
simplifying assumptions in the scheduling of mixed traffic on multiple runways.
The case study shows substantial improvements in the capacity of a system of
three runways.
Suggestions are made on how to use the PPMPS to create fast heuristic procedures for the TSP, based on divide and conquer and node insertion strategies.
Thesis Committee:
Dr. Robert W. Simpson, Professor of Aeronautics and Astronautics,
Director, Flight Transportation Laboratory, Thesis Supervisor
Dr. Amedeo R. Odoni, Professor of Aeronautics and Astronautics,
Co-Director, Operations Research Center
Dr. Stephen C. Graves, Associate Professor,
Sloan School of Management
Contents

1 Introduction

2 Description of The Runway Scheduling Problem (RSP) and Previous Work
  2.1 Introduction
  2.2 The RSP
    2.2.1 General Definitions
    2.2.2 Definition Of The RSP
    2.2.3 The Dynamic Nature of RSP
    2.2.4 The Least Time Separations
    2.2.5 The Constraints
    2.2.6 The FCFS Runway Assignment
    2.2.7 The Cost Function
  2.3 Computing An Expected Lower Bound On Runway Capacity Through Scheduling
    2.3.1 Assumptions
    2.3.2 Definitions
    2.3.3 Formulas Used
    2.3.4 Rationale And Derivations
    2.3.5 Results and Conclusions
  2.4 Literature Review
  2.5 Appendix
    2.5.1 Horizontal Separations

3 Combinatorial Concepts
  3.1 The Permutation Tree
  3.2 The Combination Graph
  3.3 The Cost Of A Permutation
  3.4 Combinatorial Search
  3.5 The State-Stage (SS) Graph Reduction of the Permutation Tree
  3.6 MPS-Tree, MPS-Graph And MPS-Combination-Graph
    3.6.1 The Bipartite Graph Representation Of The Subtrees In The MPS-Tree
    3.6.2 Computing The Size Of The MPS-Tree: The Number Of Matchings In A B-Graph
    3.6.3 The Number, T(MPS), Of Distinct Bipartite Graphs
  3.7 The Chessboard Representation And The Label Vector

4 The Parallel Processor
  4.1 Introduction
  4.2 The MPS-Graph As A Solution Space
  4.3 The MPS-Combination-Graph (MPS-CG)
    4.3.1 Construction
    4.3.2 Global And Tentative MPS-Graphs Reflecting The Dynamics Of The RSP
    4.3.3 Symmetry - The Complement Label Vector
  4.4 The Parallel Processor Network, PPMPS
  4.5 The Parallel Processor Function And Implementation
    4.5.1 The Definition Of A Letter
    4.5.2 The Optimization Step
    4.5.3 Initialization and Termination - Adapting To The Dynamic RSP Environment
    4.5.4 The Parallel Processor Algorithm
  4.6 The Implementation Of The PPMPS
    4.6.1 The Parallel Processor Data Structure
    4.6.2 The Path Storage
  4.7 Refinement In The Label Vector Storage
    4.7.1 Label Vector Tree Representation
    4.7.2 Proof That Half Of The Processors Have Only One Descendant
  4.8 Performance Aspects Of The Parallel Processor
    4.8.1 The Effect Of Additional Constraints
    4.8.2 The Effect Of Limited Number of Classes

5 Applications Of The Parallel Processor In Runway Scheduling With Takeoffs And Multiple Runways
  5.1 Mixed Landings And Takeoffs On A Single Runway
    5.1.1 The Parallel Processor Operation
    5.1.2 The Contents Of The Current Mail - The Generalized State
    5.1.3 Restrictions On The Feasible States
    5.1.4 Counting The Number Of Letters In The Current Mail
    5.1.5 General Remarks
  5.2 Multiple Runways
    5.2.1 The Parallel Processor Operation
    5.2.2 Possible TIV For Crossing And Open Parallel Runways
    5.2.3 Mixed Operations On Multiple Runways
  5.3 Case Study
    5.3.1 Case Description
    5.3.2 Results

6 Conclusions - Directions For Further Research
  6.1 Summarized Review
  6.2 Application Of The Parallel Processor In Solving The TSP
  6.3 Research Directions
Chapter 1
Introduction
Adapting to the increased demand for air transport, the flight transportation network (FTN) has grown in dimension and complexity. Its accelerated
growth has raised a number of interesting mathematical problems, concerning
the efficiency and safety of operations. The FTN can be modeled as a queueing network in which the service times depend on the sequence of the users.
Thus the rate at which operations take place is variable and can be maximized
through appropriate runway scheduling. Finding the optimal runway schedule
is a combinatorial problem related to the traveling salesman problem (TSP).
The relation to TSP and other combinatorial problems is discussed further
in chapter 2. In this chapter we take the opportunity to present the airport
terminal area and practices there and to motivate the solution of the runway
scheduling problem. We also provide an outline of the remaining chapters.
At the outset, we would like to mention that this thesis is part of the
broader research effort, undertaken at the Flight Transportation Laboratory
(FTL) at MIT, to study aspects of the FTN, using analytical and experimental
means. The aspects in question include the automation and optimization of Air
Traffic Control (ATC) in the airport terminal area, with the assistance of the
digital computer. The experimental tools in FTL include a detailed terminal
area (TA) simulation, which is implemented in a flexible manner in LISP. The
simulation can generate randomly arriving and departing aircraft, which are
capable of navigating, obeying ATC instructions and displaying themselves on
the screen of a simulated radar scope. Furthermore, the simulation allows for
good controller interaction, using speech recognition. This simulation will be
the ultimate test facility of the runway scheduling process designed in this
thesis.
The TA is the essential component of the FTN, mathematically, since organizing the converging and diverging traffic in a safe and efficient manner is a
formidable combinatorial problem. Physically, we may view the TA as a set of
arrival and departure paths between a set of entry/exit points (situated on the
boundary) and the runways, with holding stacks provided at the entry points
to accommodate queues under congestion. The TA fits naturally the model of
a queueing system, in which the runways are the servers and the aircraft are
the users. The important characteristics of the users are: arrival process, time
window and service time.
The arrival process will be assumed to be a Poisson process. As user arrival
time we will consider the time that the prospective user becomes known to
the ATC, which is typically before the time he is available to operate (earliest
operating time).
By a user's time window we mean the time interval defined by the earliest
possible time and the latest possible time a user can operate, i.e. the earliest
time an aircraft can possibly reach the runway and the time it runs out of fuel.
The service times are determined by the user characteristics, e.g. takeoff
acceleration, takeoff speed and approach speed. As we shall see in the next
section, the current separation standards result in the interoperation time for
any user pair being asymmetric and variable. This property constitutes the
combinatorial character of runway scheduling by making the schedule times
dependent on the order of operation. The schedule is defined as the assignment
of operating time and runway to each user and is equivalent to a sequence of
aircraft, given the aircraft separations. We can now search for the optimal
schedule, i.e. the aircraft sequence which minimizes the long term mean service
time, thus maximizing the runway capacity, i.e. the rate at which operations
take place. On the other hand delays, which are in general uncomfortable,
wasteful and hazardous, are very responsive, especially during times of congestion, to changes in runway capacity. So runway scheduling is very attractive
because it can help to drastically reduce delays, improving economy and safety
in the terminal area.
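The order dependence of the schedule times can be made concrete with a short sketch (illustrative only; `E` and `sep` are made-up earliest times and least time separations, not actual separation standards):

```python
def schedule_times(seq, E, sep):
    """Assign operating times along a sequence.

    seq: aircraft indices in operating order
    E:   earliest possible operating time of each aircraft
    sep: sep[i][j] = least time separation when j follows i
    """
    times = {}
    prev = None
    for a in seq:
        t = E[a]
        if prev is not None:
            # an aircraft cannot operate before clearing the separation
            # from its predecessor
            t = max(t, times[prev] + sep[prev][a])
        times[a] = t
        prev = a
    return times

# Asymmetric separations make the completion times order-dependent:
E = {0: 0.0, 1: 0.0, 2: 0.0}
sep = {0: {1: 2.0, 2: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 1.0, 1: 3.0}}
print(schedule_times([0, 1, 2], E, sep))  # {0: 0.0, 1: 2.0, 2: 3.0}
print(schedule_times([0, 2, 1], E, sep))  # {0: 0.0, 2: 1.0, 1: 4.0}
```

The two orders finish at times 3 and 4 respectively, which is exactly why sequencing affects the operation rate.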
Traditionally runway scheduling in the TA is based on rules that evolved
mainly through experience and common sense and have limited scientific character, which may not even be necessary since the schedule is bound to the First
Come First Serve (FCFS) ordering, with very few exceptions. The reasons for
having FCFS are simple:
1. FCFS reflects an indisputable sense of justice to the users.
2. Optimal reordering is a tough combinatorial problem for computers to
solve, let alone the human operators.
The latter reason refers to the Runway Scheduling Problem (RSP), which
has been dealt with effectively in this thesis. In plain words, optimal scheduling
is a rewarding alternative to FCFS, which can be expressed in the form of the
following proposition to the terminal area user:
"If you are willing to be displaced, occasionally, up to a given maximum number, MPS, of positions away from your position in the
FCFS sequence, you will on the average operate m minutes ahead of
your FCFS service time, i.e. at t_FCFS - m."
In the case study presented in chapter 5, we see that the average number of
shifts in the optimal schedule does not exceed 2, and half the time these shifts
are forward anyway. The runway capacity is almost double that of FCFS and
the delays are reduced by up to 90 percent.
Now, this proposition contains the "contractual" maximum position shift
(MPS) constraints, which limit the maximum number of position shifts of a
given aircraft in the optimal schedule to MPS. The MPS-constraints define a
combinatorial neighborhood of schedules around the FCFS schedule, which is
explored and exploited computationally. They are thus a central theme in this
thesis.
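For small instances the MPS neighborhood can be enumerated by brute force, which makes its size concrete (a sketch for illustration only; the thesis develops a far more efficient search):

```python
from itertools import permutations

def mps_neighborhood(n, mps):
    """All permutations of 0..n-1 in which every aircraft stays within
    `mps` positions of its place in the base (FCFS) order 0, 1, ..., n-1."""
    feasible = []
    for perm in permutations(range(n)):
        # position k holds aircraft a; its FCFS position is simply a
        if all(abs(k - a) <= mps for k, a in enumerate(perm)):
            feasible.append(perm)
    return feasible

print(len(mps_neighborhood(4, 1)))  # 5 feasible sequences out of 24
print(len(mps_neighborhood(4, 2)))  # 14
print(len(mps_neighborhood(4, 3)))  # 24
```

Growing MPS widens the neighborhood from the FCFS sequence alone toward the full permutation space.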
Chapter 2 presents formally the RSP, discusses its complexities and brings
out the dynamics of the arrival process which require frequent schedule updates. This requirement translates into the need for fast solution procedures.
Sources of complexity are also cited in that the least time separations, which
define the cost structure of RSP, generally violate the triangular inequality
(TI). The potential TI violation (TIV) is a consequence of the fact that one
or more takeoffs will often fit between successive landings without stretching
the interlanding separation. This condition, if unattended, will not only lead
to suboptimal scheduling but also to incorrect spacing of the landings in the
schedule.
Chapter 2 also contains a literature review and a calculation of the expected
runway capacity gains from scheduling.
In Chapters 3 and 4 we take a more abstract view of the problem, looking at
a sequence of aircraft as a permutation of discrete objects. Chapter 3 prepares
the ground for the parallel processor, which is developed in chapter 4. In
particular, it presents and analyzes the combinatorial concepts which represent
the solution space, such as the permutation tree (PT), the combination graph
and the state-stage graph, as well as the effects of the MPS-constraints on their
size and form. Searching for the optimal permutation can be effected in the
PT, or can be done more efficiently in the form of a shortest path problem on
the state-stage graph.
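The shortest path view can be sketched as a dynamic program over (subset, last object) states, where the stages are the subset sizes. This is the generic, exponential state-stage search (in the style of Held-Karp, scored here by total separation time; names and the separation matrix are illustrative). The point of the MPS constraints, developed in the following chapters, is to shrink this state space drastically:

```python
def best_sequence(n, sep):
    """Shortest path over (subset, last) states: minimize the sum of
    successive separations over all permutations of 0..n-1."""
    INF = float("inf")
    full = (1 << n) - 1
    # cost[(mask, last)] = cheapest way to sequence `mask` ending at `last`
    cost = {(1 << i, i): 0.0 for i in range(n)}
    parent = {}
    for mask in range(1, full + 1):          # stages: growing subsets
        for last in range(n):
            if (mask, last) not in cost:
                continue
            for nxt in range(n):
                if mask & (1 << nxt):
                    continue
                key = (mask | (1 << nxt), nxt)
                c = cost[(mask, last)] + sep[last][nxt]
                if c < cost.get(key, INF):
                    cost[key] = c
                    parent[key] = (mask, last)
    # Recover the cheapest full sequence by walking parents backwards.
    end = min(((full, i) for i in range(n) if (full, i) in cost),
              key=lambda k: cost[k])
    seq, key = [], end
    while key in parent:
        seq.append(key[1])
        key = parent[key]
    seq.append(key[1])
    return list(reversed(seq)), cost[end]

sep = [[0, 1, 9], [9, 0, 2], [5, 9, 0]]
print(best_sequence(3, sep))  # ([0, 1, 2], 3.0)
```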
Now, the MPS constraints reduce the representations of the solution space
to the MPS-tree, the MPS-combination graph (MPS-CG) and the MPS-graph
respectively. They further result in the appearance of stage invariant features
that allow the MPS-CG to collapse into the parallel processor network. In
chapter 4 we examine the structure of these graphs. We show how to transform
the MPS-graph into an MPS-CG with a special node structure that allows each
stage of the optimization to be carried out in parallel. The parallel processor
is, in effect, a generalized cross-section of the MPS-CG. Thus, by assigning a
processing unit to each node of the parallel processor we construct a parallel
processing machine to carry out the optimization. Chapter 4 also discusses a
refined implementation of the parallel processor and its performance.
Chapter 5 discusses the application of the parallel processor to the RSP.
Specifically we show that it can cope optimally with the TIV on a single runway and use the specifics of runway scheduling to limit the amount of additional
computation. Next we discuss the application of the parallel processor in the
case of multiple runways and suggest a heuristic way of dealing with the problem. This heuristic is demonstrated with the study of a randomly generated
case.
Finally in chapter 6 we present a summary of the thesis with conclusions
and directions for further research, discussing in greater detail heuristics for
the TSP based on the parallel processor.
Chapter 2
Description of The Runway
Scheduling Problem (RSP) and
Previous Work
2.1 Introduction
This chapter gives a detailed presentation of, and examines work related to,
the Runway Scheduling Problem (RSP). Our discussion begins by placing the
RSP in its proper context of combinatorial problems, underlining its distinctive
features. In this sense, the RSP is a relative of the traveling salesman problem
(TSP) and its derivatives, which range from variations of the multiple vehicle
routing problem (VRP) to job scheduling with sequence dependent setup costs.
In all these problems we seek to:
Find an optimal sequence - permutation - of a given set of objects, which
can be aircraft, cities or jobs to be processed on machines, given:
a) a matrix of succession costs c_ij, which can be the least time separations
t_ij between consecutive runway operations (runway operations are landings
and takeoffs), distances d_ij between cities, or sequence dependent setup costs;
b) an objective function that induces a preference order on these permutations;
c) a set of constraints.
All these problems are notorious for their computational complexity. They belong to the class of NP-complete problems and require immense amounts
of time to solve to optimality. The name NP, standing for "Nondeterministic Polynomial", derives from the fact that these problems can be
cast as language recognition problems of complexity theory, and signifies that
no polynomially bounded algorithm is known for their solution. The attribute
complete distinguishes the subclass of NP problems defined as follows: "a
problem X is in class NP-complete if it can be transformed into an instance of
the satisfiability problem and if satisfiability can be transformed into an instance
of X, with polynomially bounded transformations."
Though these problems transform into each other, the transformations have
little practical interest, because, usually, the special structure of each problem
dictates different solution procedures. However, research results on related
areas are still a source of mutual inspiration and occasionally they can be used
unmodified. For instance, Parker [30] used ideas from vehicle routing to solve
a job scheduling problem.
The most common feature, from the practical point of view, is that most
of these problems need a backtrack search procedure or dynamic programming
to be solved optimally.
Next, focusing on the RSP, we can identify several features which distinguish it rather sharply from its affiliates, the most notable being:
1. The dynamic nature of the RSP optimization environment, where new
events - i.e. information about newly arriving aircraft, sudden changes of
the weather, etc. - incessantly alter the picture. Consequently, a stream
of optimal schedule (sequence) updates is in order, rendering the speed
of the optimization critical, because if the rate of incoming events exceeds
the rate of optimization we have missed the point in optimizing.
2. The arrival sequence of the aircraft - referred to as First Come First
Serve (FCFS) sequence - is of central importance, mainly because we
want to prevent the eventuality of an aircraft being indefinitely displaced
backwards, behind later arriving aircraft, given the repeated schedule
updates.
To avert such a possibility we instituted the Maximum Position Shift
(MPS) Constraints which prevent an aircraft from being displaced by
more than a prespecified number (MPS) of positions (shifts) away from
its position (order) in the FCFS sequence.
Fortunately the MPS constraints endow RSP with a special structure that
can be exploited in order to expedite the search for an optimal solution.
The MPS constraint solution space is central to the work in this thesis
and has no direct analog in the literature. The MPS constraints define a
neighborhood of sequences, corresponding to tours in the TSP, just like
the k-exchange heuristic defines a neighborhood of tours for the TSP
(more precisely, the sequences correspond to Hamiltonian paths, but the
problem of finding a path and that of finding a tour are easily transformable to each other), and we have designed an algorithm (AMPS) that
will efficiently find the optimal sequence in the neighborhood.
In fact we could now try to solve the TSP by applying AMPS to some
initial guess tour. Such a tour could be anything from a random tour to
the result of a heuristic algorithm.
3. The distance matrix of the Euclidean TSP, where the triangular inequality holds, assures our ability to make incremental evaluations of (partial)
permutations, and rejections based on these evaluations. However, the
least time separation matrix in the presence of both takeoffs and landings
violates this inequality, thus creating special problems (to be dealt with
later on in this work).
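Such violations are mechanical to detect. A sketch, with an illustrative (not standard) separation matrix in which a takeoff T fits between landings L1 and L2 without stretching the interlanding gap:

```python
def ti_violations(sep, ops):
    """Report triples (i, k, j) with sep[i][k] + sep[k][j] < sep[i][j],
    i.e. inserting k between i and j costs less than going directly."""
    return [(i, k, j)
            for i in ops for j in ops for k in ops
            if len({i, j, k}) == 3 and sep[i][k] + sep[k][j] < sep[i][j]]

# Illustrative least time separations (seconds): a takeoff slipped
# between two landings beats the direct landing-landing separation.
sep = {
    "L1": {"L2": 120, "T": 50},
    "T":  {"L2": 60, "L1": 60},
    "L2": {"L1": 120, "T": 50},
}
print(ti_violations(sep, ["L1", "T", "L2"]))
# [('L1', 'T', 'L2'), ('L2', 'T', 'L1')]
```

Each reported triple is a spot where a naive incremental evaluation would misprice or misspace the schedule.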
4. The multiplicity of runways makes the least time separation matrix a
four dimensional quantity, because typically the time separation between
the same two successive operations, of aircraft i and j, on two different
runways will differ from their separation if they operate on any other
runway pair or if they both operate on the same runway.
5. In the runway scheduling problem, we are mainly concerned with the
user's perspective, which is, of course, the aircraft delays, as opposed to
the system's perspective. For example in the classical TSP or vehicle
routing we wish to minimize the total distance traveled, by the salesmen
or the vehicles. This is equivalent to minimizing the time the runways are
busy (occupied). But this objective reflects an aspect of the system and
is only pertinent in RSP implicitly, because it induces an improvement
on the system capacity - rate of operations - which in turn has a drastic
effect on delays of aircraft especially in times of congestion.
It has thus been suggested to employ an alternative objective function,
the Total Weighted Delay (TWD), which directly expresses the user's
interest, and offers the possibility of fair discrimination among the users
according to their special needs. In this work we investigate both objectives of minimum busy time and TWD. In TWD we minimize the
weighted sum of the waiting to land times for all the aircraft. This objective divorces RSP from the classical TSP because now the succession
cost of two operations is not known ahead of time but depends on the
previous sequence.
An analog to the TWD, called average tardiness, is found in the job
scheduling literature. It differs, however, from the TWD in two ways.
First, tardiness starts counting from some specified delivery deadline and
not from the time the job is available to be processed. Second, and most
important, is that in most of the job scheduling problems there are no
sequence dependent setup costs.
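The difference between the two objectives can be seen on a toy instance (all names and numbers below are illustrative): with identical pairwise separations every order keeps the runway busy equally long, yet TWD discriminates among the orders by the weights of the waiting aircraft.

```python
def schedule(seq, E, sep):
    """Greedy time assignment: each aircraft operates as early as its own
    earliest time and the separation from its predecessor allow."""
    t, prev = {}, None
    for a in seq:
        t[a] = E[a] if prev is None else max(E[a], t[prev] + sep[prev][a])
        prev = a
    return t

def busy_time(seq, E, sep):
    t = schedule(seq, E, sep)
    return t[seq[-1]] - t[seq[0]]      # first to last operation

def twd(seq, E, sep, w):
    t = schedule(seq, E, sep)
    return sum(w[a] * (t[a] - E[a]) for a in seq)  # weighted waiting times

E = {0: 0, 1: 0, 2: 0}
w = {0: 3, 1: 1, 2: 1}                 # aircraft 0 is "expensive" to delay
sep = {0: {1: 2, 2: 2}, 1: {0: 2, 2: 2}, 2: {0: 2, 1: 2}}
for seq in ([0, 1, 2], [1, 2, 0]):
    print(seq, busy_time(seq, E, sep), twd(seq, E, sep, w))
# [0, 1, 2] 4 6
# [1, 2, 0] 4 14
```

Both orders occupy the runway for 4 time units, but putting the heavily weighted aircraft last more than doubles the TWD.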
6. Another feature of the RSP, which is also found in the vehicle routing
literature, is the time window constraints. In the RSP, the time window of
an aircraft is the interval between the aircraft's earliest possible operating
time and its latest possible operating time, perhaps before it runs out
of fuel. Usually the latest possible operating time is not binding, since
aircraft always carry extra fuel. There are a few interesting comments,
however, about the earliest possible times E_i. First of all, each aircraft
i will have a different such time for each of the possible runways, so we
need a superscript as well, to denote the runway. Often we will omit
the superscript if we talk about only one runway, or with the implicit
assumption that all the E_i^r's are approximately equal for our purposes.
Next, we observe that the interval between successive earliest arrival times
is a random variable, with an exponential probability distribution. This
follows from the assumption that the aircraft arrivals follow a Poisson
process. Now the expected value of this interval is the inverse of the
arrival rate. These observations enable us to use results from queueing
theory in order to predict the macroscopic behavior of the system, and
motivate the optimal scheduling. The basic insight from queueing theory
is that the delay of users - aircraft in the RSP - in a system is very
sensitive to the value of ρ, where ρ is the ratio of the arrival rate to the
mean service rate, which is the runway capacity in the RSP. (The expected
delay for a system of one server is proportional to (1 - ρ)^(-1).)
Given this result we may conclude that optimization is needed most in
congested situations when the arrival rate approaches the mean service
rate. The build up of queues, in such a case, of aircraft that hold enroute or
within the airport terminal area, also creates the most interesting computational situation, because now it is more difficult to find an optimal
solution. With a small arrival rate the earliest time constraints will be
strongly binding, thus reducing the number of feasible schedules to a
minimum. However when large queues build up then the earliest time
constraints become secondary because increasingly all the aircraft in the
queue are available for immediate operation.
Such congested situations can be triggered or accentuated by less frequent events like bad weather, runway changes, accidents, etc. There,
good "traffic management" will also contribute to safety.
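The sensitivity to ρ can be quantified with the textbook expected queueing delay of an M/M/1 system, W_q = ρ / (μ(1 - ρ)) (quoted for illustration only; the arrival and service rates below are made-up numbers, not airport data):

```python
def mm1_wait(lam, mu):
    """Expected M/M/1 queueing delay in hours; grows like 1/(1 - rho)."""
    rho = lam / mu
    assert rho < 1, "unstable queue"
    return rho / (mu * (1 - rho))

lam = 28.0                       # arrivals per hour
for mu in (30.0, 33.0):          # runway capacity, operations per hour
    print(mu, round(60 * mm1_wait(lam, mu), 1))  # expected delay, minutes
# 30.0 28.0
# 33.0 10.2
```

Near saturation a roughly 10 percent capacity gain cuts the expected delay from 28 minutes to about 10, which is why even modest scheduling gains matter most under congestion.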
7. Another appropriate remark is that the aircraft become known to the
ATC before their earliest possible time of operation. This "lead time"
is a parameter of the optimization and contributes to better scheduling.
Ideally we should know about all the future arrivals, within a time horizon, say a day, right at the outset.
8. On the other hand, in a practical situation, we will also have to fix the
schedule of aircraft awaiting operation typically 10 to 20 minutes before
they actually operate. This "advance notice time" is essential for aircraft
to carry out their prelanding checks and maneuvers, which will allow them
to meet the runway safely and on schedule, or to taxi to their assigned runway for takeoffs. Ideally, from the optimization's point of view,
this advanced notice time should be as small as possible, whereas from
the pilot's and controller's perspective, small notice time is undesirable
because it increases their workload and level of stress. So, it is interesting to find a good value of the advanced notice time that will resolve this
conflict. Such a value could be determined experimentally.
Alternatively, we could have requested to fix the schedule for the first, say,
5 or 6 aircraft in the tentative schedule. In general, we expect that a
new aircraft arrival will affect the optimality of the tentative schedule π,
and its effect may in principle ripple back to the beginning of π. However,
given a large enough number of aircraft in the queue, we anticipate that
the front part of the reoptimized schedule π' will not differ greatly from
the corresponding front part of π. We are thus not losing very much
by fixing the schedule for the first few aircraft if we have 5 to 10 times
as many in the queue. These issues, of lead time and advance notice
time, are secondary to the optimization problem and will be addressed
later on in an experimental fashion, since they are intractable to deal with
analytically.
These special features of the RSP, in conjunction with the peculiarities of
the other problems, isolate it computationally. So we have concentrated our
efforts on tailoring an algorithm to the needs of the RSP and, as we said above,
we hope to use this algorithm, in future research, to address the TSP and the
vehicle routing problem.
In the following presentation we start with definitions of relevant quantities,
culminating with a formal definition of the RSP. We discuss its important
aspects including its dynamic nature, the origin of the least time separations,
the origin and effect of the constraints, the sequence evaluation rule and the
choice of cost function.
Further on we present some properties of the optimal solution which allow us to calculate an expected lower bound on the improvement in runway
capacity under optimization, assuming only landings of the same weight class
on a single runway. The bound is presented as a function of MPS and is derived by considering the reduction in the average time separations E(sep) after
optimization. This reduction in E(sep) is found to be more than half of the
expected total reduction margin for MPS values of 5 and 6.
The chapter concludes with a literature review section where we give an
account of preceding work on the RSP and recent developments in related
problems.
2.2 The RSP

2.2.1 General Definitions
Before looking at the definitions the reader is warned that some symbols,
especially i and j serve as dummy variables and are redefined within each new
environment (context). So i could be a particular aircraft or the position of an
aircraft in the schedule.
F = {all prospective aircraft over a time horizon h, at a given current time t0}.
Later on we will give a more precise meaning to the term horizon.
Q = {all prospective aircraft known to ATC at a given current time t0}.
n = the total number of users we consider in our optimization.
S = {i : 1 ≤ i ≤ n} = a subset of F that may be a subset of Q, if for instance
we are limited by computational resources, or a superset of Q if we include
predicted aircraft. S is the input to the algorithm that solves the RSP.
R = the set of runway indices, r.
I; = time when aircraft i becomes known to the ATC and we will call it
aircraft entry time.
El = earliest possible time aircraft i may operate on runway r
= arrival time at r if not delayed.
E; = min,{E}
FCFS = First Come First Serve order of S based on ascending E's.
Note that we have a choice in defining the FCFS order and runway assignment rule in the presence of multiple runways. Also for convenience
we will use i to identify the aircraft in the i-th FCFS position. Hence:

i < j ⇒ E_i ≤ E_j, ∀ i, j ≤ n
t_{i,j} = least allowable time separation between aircraft i (leading) and j (following), ∀ i, j ∈ S on the same runway.
Here we note that we expect coupling in the operations on different
runways. This coupling or interference will be reflected in an increased
number of subscripts for the minimum time separation. Thus
t_{(i,p),(j,r)} = least time separation between aircraft i and j operating in succession on runways p and r, and obviously t_{(i,p),(j,p)} = t_{i,j}.
Now we observe that a schedule assigns to each aircraft a time of operation and a runway. The time assignment is equivalent to a permutation
of the aircraft and a rule of assigning times given the minimum time
separations and earliest possible arrival times. We will thus denote:
π^S = {permutation of the available aircraft, i.e. the set S}, such that:
π_k^S = the aircraft in position k of π, and:
(π^S)^{-1}_i = the position of aircraft i in π; the superscript -1 denotes the inverse.
Note that we will frequently drop the superscript S when there is no ambiguity.
r_i^S = the runway assigned to aircraft i.
t_{π_j} = h_j(π^S, r^S; {E_i^r}) = the time evaluation rule assigning an operating time to the aircraft in position j of the sequence π^S. The function h, to be discussed later on in detail, is parametrically dependent on the set of the earliest possible times, because the time aircraft π_j is scheduled to operate, under a given sequence π, is affected by the E_{π_k}, ∀ k < j, and not solely by E_{π_j}.
d_i = t_i − E_i = delay incurred by aircraft i.
w_i = weight of aircraft i reflecting fuel consumption, number of passengers, etc.
2.2.2 Definition Of The RSP
Now we will define problem P over the aircraft set S as follows:
Given S and the set {E_i}, containing the earliest arrival times of the aircraft in S, find the permutation π^S and runway assignment r^S that:

minimize z = f(π^S, r^S)   (2.1)

subject to the schedule time evaluation rule:

t_{π_i} = h_i(π^S, r^S; {E}), ∀ i ∈ S   (2.2)

and the position shift constraints:

g(i, π^S) ≤ 0, ∀ i ∈ S   (2.3)

where g and h are given functions to be considered shortly.
For reasons that we will discuss next, the RSP is P defined over S = F, and we will call it dynamic (RSPD) to underline the fact that the set F - of all prospective aircraft - is not completely known ahead of time, but reveals itself as time goes on. We will also call static (RSPS) any instance of P in which S = Q and all the prospective aircraft are known ahead of time.

2.2.3 The Dynamic Nature of RSP
In the above definitions of static and dynamic RSP we implicitly attribute the dynamic nature of the problem to the lack of timely information about the aircraft to come. In this sense, if we knew all the future arrivals a priori, then we could schedule them absolutely optimally by solving a huge static problem. Evidently, not knowing all the future arrivals condemns our schedule to suboptimality because, though we are prepared to update the schedule upon receiving new information, we have to make irrevocable decisions, i.e. operate aircraft according to the tentative best schedule. We may not have the opportunity later to revoke these decisions while reoptimizing.
The quality of our scheduling process - of continuous tentative schedule
updates - can only be judged a posteriori at the end of the operating horizon,
by looking at the actual operations. We can then compare it to the FCFS
discipline or the absolute optimal schedule which can be obtained by solving
the complete static problem.
Technically, the formulation of problem P aims to bring out the identical nature of the static and dynamic problems. Their only difference lies in their input aircraft sets. However, we only know how to solve exactly the static problem, when S equals the available aircraft set Q, using a procedure A(S) which takes as input a set S and returns its optimal sequence, subject to the existing constraints. We then propose to construct a procedure B(S) that solves the RSPD based on the set of optimal sequences,

{A(Q_i) : Q_i ∈ Q}
where Q is a set of collectively exhaustive subsets of F. The composition of each Q_i is determined by the aircraft entry times and by the structure of B(S) itself. In turn, the value of B(F) depends on Q and obviously

v(B(F)) ≤ v(A(F))
where v(A(F)) is the a posteriori value of the problem, constituting an absolute
best bound on the problem.
We want to briefly extend this discussion in order to point out that whereas in the dynamic RSPD we are forced (by the lack of future information) to break up the problem by partitioning the set F, we might indeed find decomposition desirable. We could for instance devise a good "divide and conquer" scheme, or an iterative heuristic procedure - where we apply A(S) on partitions of the available set - in order to solve large static problems, sacrificing optimality in favor of speed of computation and savings in computational resources. In fact some of the initial plans for attacking the RSP focused on such fragmentation ideas. However the parallel processor, developed later, proved by far superior because it very efficiently solves the landings-only problem optimally, and it can be easily extended to accommodate takeoffs and multiple runways.
Busy Periods
So far the reader may still wonder what is a typical time horizon.
To
answer this question we first observe that the arrival rates of aircraft are not
constant during the day. There are peak hours of extreme congestion, and time
periods - most notably at night - where the system is completely empty. So like
any other schedule the FCFS schedule contains periods of contiguous activity,
which we call busy periods, interleaved with idle periods where the system
is empty. A Busy Period (BP), which in technical terms is defined as the time
interval in which all operations on the runways are separated by their least
time separations, resulting from the FCFS schedule, is a natural time horizon.
This is so because the composition⁴ of, and the optimization within, a given busy period are completely independent of any other busy period.
A practical point here is that if we want to evaluate experimentally our
optimizing process, we can generate random aircraft arrival times and run our
procedure based on the BPs that emerge from that random FCFS sequence.
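The decomposition of a random FCFS landing sequence into busy periods can be sketched as follows (a minimal Python illustration, assuming landings only on a single runway; the function name, the data layout, and the constant separation in the example are assumptions of this sketch):

```python
def busy_periods(earliest, sep):
    """Split a FCFS landing sequence (single runway) into busy periods.

    earliest -- FCFS-ordered list of earliest possible times E_i (seconds)
    sep      -- sep(i, j): least time separation between successive aircraft
    Returns the busy periods as lists of FCFS positions.
    """
    periods, current = [], [0]
    t_prev = earliest[0]                 # operating time of the previous aircraft
    for j in range(1, len(earliest)):
        ready = t_prev + sep(j - 1, j)   # earliest the runway can serve j
        if earliest[j] > ready:
            # the runway falls idle before j shows up: a new busy period starts
            periods.append(current)
            current = [j]
            t_prev = earliest[j]
        else:
            current.append(j)
            t_prev = max(earliest[j], ready)
    periods.append(current)
    return periods
```

For example, with earliest times [0, 50, 100, 500, 560] seconds and a constant 90-second separation, the FCFS schedule splits into two busy periods, [0, 1, 2] and [3, 4], which could then be optimized independently.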
2.2.4 The Least Time Separations
The least time separations, whose variability obviously gives rise to the RSP,
originate from the existing horizontal ATC radar separation standards, pertaining to safety in the terminal area especially during Instrument Meteorological
Conditions (IMC). However, we will not be concerned with the derivation of
the least time separations. We will only single out their characteristics that
relate to the solution of the RSP. For samples of horizontal separations the
reader is referred to the appendix, though for an expert discussion on the topic
the reader is referred to R. Simpson [37].
The least time separations can be seen as entries of a 4-dimensional matrix t_{(i,r_i),(j,r_j)}, where i and j are successive aircraft in a sequence, with i preceding j, and r_i, r_j are their runway assignments. (We may, occasionally, drop the runway indices if we talk about successive operations on the same runway.)
Crossing Runways
Let's now look at an actual scenario in which the least time separations take
on meaning. Suppose that we have decided on which aircraft will land on each
⁴The composition of busy periods is independent if we assume a memoryless arrival process for the aircraft, for example a Poisson arrival process.
of the runways, assuming for the moment that we have only two crossing runways. In landing the aircraft on each of the runways we now have an increased
responsibility. While we still have to maintain the least time separations between successive aircraft on the same runway we have to also ensure that there
is no conflict with aircraft operating on the second runway. Thus if A1 and A 2
are successive operations on runway A and B 1 is an operation on runway B,
w~iere A and B are crossing, then if B 1 is inserted between A1 and A 2 it has to
wait until A1 has cleared the runway crossing point. Then A 2 will have to wait
until B 1 has cleared the runway crossing point etc. Schematically we have:
A_1 --(≥ t_{A_1,B_1})--> B_1 --(≥ t_{B_1,A_2})--> A_2 ...

Obviously the notation is not consistent but convenient, with t_{A_1,A_2} being the same-runway separation and t_{A_1,B_1} being the cross-runway separation.
The case of crossing runways is easy, and we can, for simplicity of illustration, assume that the inter-runway time separations are all constant, irrespective of the type of aircraft and type of operation (landing or takeoff), and approximately equal to 30 seconds. These 30 seconds account for an average time, required by a landing or a takeoff to clear the runway crossing point, augmented by some safety factor. Of course, in a real situation, we would use precise numbers.
Parallel Runways
Next consider a scenario of parallel runways. Depending on the distance between the runways the FAA regulations distinguish between close and widely spaced
parallel runways.
The wide parallel case is simple. Given that runways A and B are widely spaced parallel, then a landing on A followed by a landing or takeoff on B, or a takeoff on A followed by a landing on B, are completely independent, in the sense that the two consecutive operations can take place simultaneously on the two runways. A takeoff can be initiated as a landing aircraft is touching down, or two aircraft, landing on A and B respectively, can fly their final approaches in parallel. The only restriction is for successive takeoffs, where a minimum separation is imposed according to the departure routing of the aircraft involved.
If the runways are close parallel, the FAA regulations further restrict consecutive landings on the two runways to maintain a minimum horizontal separation
whose projection on the runway axis has to be greater than 2 nautical miles
(NM). Based on this horizontal separation, and denoting by v_k the approach speed of aircraft k, we can calculate the least time separation between aircraft i on runway A and j on runway B using the following formula:

t_{(i,A),(j,B)} = 2/v_j + F max(0, 1/v_j − 1/v_i)
where 2 NM is the minimum horizontal separation to be maintained during the final approach and F is the length of the final approach. In case the leading aircraft i is faster than j, then the minimum separation will occur at the outer marker, which, on the runway extended line, marks the beginning of the final descent of aircraft onto the runway threshold, and the term

F max(0, 1/v_j − 1/v_i)

represents the additional time that the following aircraft j needs to cover the final approach, in excess of the time 2/v_j it needs to cover the least horizontal separation. Otherwise, if aircraft j is faster than i, the minimum horizontal separation will be realized when i is at the runway threshold. Thus j has to only cover the minimum horizontal separation and so the second term on the RHS is zero.
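This formula can be evaluated numerically; here is a minimal Python sketch, in which the function name and default values are illustrative assumptions (a 5 NM final approach), with speeds in knots and distances in nautical miles, so the result is converted from hours to seconds:

```python
def least_time_separation(v_lead, v_follow, sep_nm=2.0, final_nm=5.0):
    """Least time separation (seconds) between a leading aircraft at speed
    v_lead and a following aircraft at speed v_follow (both in knots), for
    a minimum horizontal separation sep_nm (NM) on a final approach of
    length final_nm (NM).

    t = sep/v_j + F * max(0, 1/v_j - 1/v_i): the second term is nonzero
    only when the follower is slower than the leader.
    """
    hours = sep_nm / v_follow + final_nm * max(0.0, 1.0 / v_follow - 1.0 / v_lead)
    return 3600.0 * hours
```

For a 150-knot leader and a 120-knot follower this gives 90 seconds; with the speeds reversed it drops to 48 seconds. The same function also covers successive landings on a single runway, by passing the 3-6 NM wake-vortex separation as sep_nm.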
Single Runway
The same calculation carries over to the case of successive landings on the same runway. The same formula holds, but now we have to consider a minimum horizontal separation s_{i,j}, which varies between 3 and 6 NM depending on the weight category of the successive aircraft. The reason for the increased separations is that heavy aircraft generate severe wake vortices that can greatly disturb the stability and general welfare of following aircraft if the latter come too close. So the least time separations on the same runway can be expressed as:

t_{i,j} = s_{i,j}/v_j + F max(0, 1/v_j − 1/v_i)
Finally we are left with takeoff to takeoff, takeoff to landing and landing
to takeoff separations on the same runway. The takeoff to takeoff separation
depends again on the departure procedures. The landing to takeoff and takeoff
to landing time separations depend on the runway occupancy times of the
preceding aircraft. As in the case of crossing runways above we can again
assume a constant 30 seconds for all these times.
The important point is that the least time separations do not satisfy the triangular inequality⁵. So, the sum of the least time separations between a landing L_1, followed by a takeoff T and then by a landing L_2, might be 60 seconds, whereas the minimum separation between the landings alone would perhaps be 130 seconds. Thus if we were to evaluate the schedule by adding successive time separations we would space the landings dangerously close.
In order to correctly evaluate the schedule we have to keep track of a last
landing record, for each of the runways, during our sequential schedule evaluation, which will be updated upon every new landing and will help us maintain
the correct spacing of landings and takeoffs.
⁵By the triangular inequality we mean that each of the sides of a triangle has to be less than the sum of the other two.
2.2.5 The Constraints
The constraints of this problem can be broadly grouped in two categories. As can be seen from the definition of problem P, we are required to use a Time Evaluation rule, equation 2.2, and to satisfy the inequality constraints 2.3, called indefinite delay constraints. We will examine first the latter constraints, which require considerably less explanation than the former.
The Maximum Position Shift (MPS) Constraints
The MPS constraints form a set of artificial constraints, imposed on the RSP. Their form is given by the inequalities 2.3 and they are intended to:
ensure that no aircraft is indefinitely delayed, given the repeated
schedule updates upon the arrival of new aircraft.
This is achieved by limiting the position of an aircraft, in any MPS feasible
sequence, to be within a prespecified maximum number of position shifts away
from its position in the FCFS sequence. Thus we can write:
| (position in FCFS) − (resequenced position) | ≤ MPS
Using the foregoing general definitions, where π_k is the aircraft in the k-th position of the sequence π and (π)^{-1}_i is the position of aircraft i in π, we can rewrite the MPS constraints as follows:

(π^{FCFS})^{-1}_i − MPS^{forward} ≤ (π)^{-1}_i ≤ (π^{FCFS})^{-1}_i + MPS^{backward}, ∀ i ∈ S

and, recalling that aircraft are named after their FCFS position, i.e. (π^{FCFS})^{-1}_i = i,

i − MPS^{forward} ≤ (π)^{-1}_i ≤ i + MPS^{backward}, ∀ i ∈ S   (2.4)

where S is the set of available aircraft.
Note also in the above equation that the superscripts backward and forward suggest an asymmetric use of the MPS constraints, which enables the controller to fix the position of any aircraft in any feasible schedule to a desired one, within the MPS window⁶, by setting

MPS^{forward} = −MPS^{backward}
In the next chapter we will see that symmetric MPS constraints endow the RSP with a regular structure that can be exploited algorithmically and can be analyzed mathematically.
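The MPS feasibility test of constraints 2.4 can be sketched in a few lines (Python; the function name and the orientation convention - forward shifts move an aircraft to an earlier position, backward shifts to a later one - are assumptions of this illustration):

```python
def mps_feasible(seq, mps_forward, mps_backward):
    """Check the MPS window for a candidate sequence.

    seq -- permutation of 0..n-1, aircraft named by FCFS position,
           so seq[pos] = i means aircraft i sits in position pos.
    An aircraft may move at most mps_forward positions earlier and at
    most mps_backward positions later than its FCFS position.
    """
    return all(i - mps_forward <= pos <= i + mps_backward
               for pos, i in enumerate(seq))
```

For instance, with one allowed shift in each direction, swapping two adjacent aircraft is feasible, but moving an aircraft two positions is not.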
Latest Possible Time (LPT) Constraints
It is also possible to consider Latest Possible Time (LPT) constraints, which can either ensure that no aircraft is assigned an operating time after a critical time, when for instance the aircraft would run out of fuel, or, in an emergency, offer an alternate way to give absolute priority to an aircraft with an engine fire.
The LPT constraints are provided for in our solution procedures but we
will ignore them because usually they are not binding, as aircraft always carry
extra fuel, and thus, they do not contribute to the computational aspects of
the problem.
The Time Evaluation Rule (TER)
The TER is a recursive way of assigning operating times to a sequence
of aircraft, given the aircraft's earliest arrival times and the subtleties due to
mixed takeoffs and landings as well as multiple runways. The TER is first of all
necessary in order to find the operating times under the FCFS discipline and
⁶In case the desired position is not within the MPS window, one could change the original FCFS sequence so that the aircraft in question has the appropriate place.
we will shortly discuss its alternative use as an implicit way of implementing
the natural constraints of the problem which are the following:
Least Time Separation constraints: So far we have treated the least time separation between successive aircraft as a cost. However, it reflects indeed
a constraint, which forces the schedule times of consecutive operations
to be spaced apart by prespecified amounts, as described in the previous
subsection.
Earliest Possible Time (EPT) constraints: they ensure that no aircraft is assigned an operating time before its E_i.
The EPT constraints have a favorable effect on the speed of finding an
optimal solution because they reduce the number of feasible sequences.
TER With Landings Only On Single Runway
Now, in the absence of earliest time constraints, with landings only on a single runway and given a sequence π of aircraft, the Time Evaluation Rule (TER) would find recursively the operating time for the aircraft π_j in the j-th position of π, based on the total time of the preceding successive operations. This simplest TER relates the operation times of the successive aircraft π_{j-1} and π_j in the following way:

t_{π_j} = t_{π_{j-1}} + t_{π_{j-1},π_j}
Given t_{π_1}, the operating time of the aircraft that operated last on the runway, we can recursively compute all the t_{π_j}'s, ∀ j ∈ S. Strictly speaking the equality should be replaced by a greater-or-equal sign to correctly represent the least time separation constraint. However we are, in principle, always able to satisfy the equality provided that aircraft π_j can be on time. Otherwise we have to modify the TER as follows:
t_{π_j} = max(E_{π_j}, t_{π_{j-1}} + t_{π_{j-1},π_j})
and now the earliest time constraints are also satisfied.
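This recursion translates directly into code; a minimal Python sketch (names and data layout are assumptions of the illustration):

```python
def ter_landings_single_runway(seq, earliest, sep):
    """Time Evaluation Rule for landings only on one runway:
    t_{pi_j} = max(E_{pi_j}, t_{pi_{j-1}} + t_{pi_{j-1},pi_j}).

    seq      -- the sequence pi as a list of aircraft ids
    earliest -- earliest[i] = E_i
    sep      -- sep(i, j): least time separation for i followed by j
    """
    times = []
    for k, ac in enumerate(seq):
        t = earliest[ac]
        if k > 0:                         # separation from the predecessor
            t = max(t, times[-1] + sep(seq[k - 1], ac))
        times.append(t)
    return times
```

With earliest times [0, 50, 300] and a constant 90-second separation, the evaluation yields [0, 90, 300]: the third aircraft is held by its EPT constraint, opening a new busy period.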
TER With Landings Only And Multiple Runways
Extending to the case of multiple runways, we have to ensure that the inter-runway separations are satisfied, as described in the previous section. In deciding to land aircraft π_j on a runway r_j, where π is the given sequence for which we want to determine the operating times and r is its associated runway assignment, we must check the landing times of all the preceding aircraft π_i, with i < j, that were last to operate on each of the runways, and ensure that all the time separations - same runway and inter-runway - are satisfied. So the TER becomes:

t_{π_j} = max( max_{i∈I} (t_{π_i} + t_{(π_i,r_i),(π_j,r_j)}), E_{π_j}^{r_j} )   (2.5)

where:

I = {i : π_i = the last aircraft to operate on its assigned runway r_i, i < j}
TER With Mixed Landings And Takeoffs On A Single Runway
If we were to consider a single runway and mixed takeoffs and landings, then we would have to keep track of the preceding landing if the current aircraft in position i is a takeoff, following the discussion in the previous section. Thus the TER now becomes:

t_{π_i} = max( t_{π_{i-1}} + t_{π_{i-1},π_i}, t_{π_l} + t_{π_l,π_i}, E_{π_i} )   (2.6)

where l is the position, before position i, containing the last landing, if π_i is a takeoff, and l ← i − 1 if π_i is a landing.
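The last-landing bookkeeping that this rule requires - necessary because the separations need not satisfy the triangular inequality - can be sketched as follows (Python; the function name, the 'L'/'T' encoding and the numbers in the example are assumptions of this illustration):

```python
def ter_mixed_single_runway(seq, kind, earliest, sep):
    """TER sketch for mixed landings and takeoffs on one runway: each
    operating time is checked against both the immediate predecessor and
    a last-landing record, which is updated upon every new landing.

    seq      -- the sequence as a list of aircraft ids
    kind     -- kind[ac] is 'L' (landing) or 'T' (takeoff)
    earliest -- earliest[ac] = E_ac
    sep      -- sep(i, j): least time separation for i followed by j
    """
    times, last_land = [], None          # last_land: position of last landing
    for k, ac in enumerate(seq):
        t = earliest[ac]
        if k > 0:
            t = max(t, times[k - 1] + sep(seq[k - 1], ac))
        if last_land is not None and last_land != k - 1:
            # enforce the landing-to-landing spacing across inserted takeoffs
            t = max(t, times[last_land] + sep(seq[last_land], ac))
        times.append(t)
        if kind[ac] == 'L':
            last_land = k
    return times
```

In the 60-versus-130-second example above (landing, takeoff, landing with 30-second takeoff separations and a 130-second landing-to-landing separation), this evaluation correctly schedules the second landing at 130 seconds, not 60.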
Mixed Takeoff And Landings On Multiple Runways
Finally when we have both multiple runways and mixed traffic, we obviously
have to keep track of the last landing on all the runways. So we can write the
TER as:
t_{π_j} = max( max_{i∈I} (t_{π_i} + t_{(π_i,r_i),(π_j,r_j)}), max_{i∈L} (t_{π_i} + t_{(π_i,r_i),(π_j,r_j)}), E_{π_j}^{r_j} )   (2.7)
where:
I = {i : π_i = the last aircraft to operate on runway r_i, i < j, t_{(π_i,r_i),(π_j,r_j)} > 0}
L = {i : π_i = the last aircraft to land on runway r_i, i < j, t_{(π_i,r_i),(π_j,r_j)} > 0}

2.2.6
The FCFS Runway Assignment
In the case of multiple runways we clearly have a choice in assigning an operating runway to each aircraft. In fact, with m runways and n aircraft we have m^n such possible partitions. For the FCFS sequence we have chosen the following rule for the runway assignments, which states that:
each aircraft will be assigned to the runway on which it can safely
operate as early as possible, given the runway assignments of its
preceding arrivals.
This rule will generate a unique schedule that reflects a small degree of optimization. In order to calculate the FCFS operating times we can use the
TER with a small modification to account for the case of multiple runways
when operations on parallel runways are uncoupled. As we will see later, a
strict requirement for the optimization is that the schedule times should be
monotonically increasing with increasing position in the sequence. This monotonicity assumption restricts the inter runway least time separations to be at
least equal to zero. However this requirement is lifted for the FCFS sequence
and an aircraft π_i^{FCFS} in position i can be scheduled earlier than its predecessor π_{i-1}^{FCFS} if r_i ≠ r_{i-1} and the operations on these runways are independent.
Recalling that π_i^{FCFS} = i, we can formally restate the above rule recursively as follows:
Assign to aircraft j the runway r_j and time t_j such that:

t_j = min_{r_j∈R} max( E_j^{r_j}, max_{i∈I∪L} (t_i + t_{(i,r_i),(j,r_j)}) )   (2.8)

where:

R is the set of available runways and
I = {i : i = the last aircraft to operate on runway r_i, i < j, t_{(i,r_i),(j,r_j)} > 0}
L = {i : i = the last aircraft to land on runway r_i, i < j, t_{(i,r_i),(j,r_j)} > 0}
Note that the clause t_{(i,r_i),(j,r_j)} > 0 in the definitions of I and L allows for a later aircraft to operate earlier than its preceding aircraft in the absence of runway interference.
For example, this condition of zero inter-runway separation is met when we have a takeoff T succeeding a landing L in the FCFS sequence, and two close parallel runways A and B. Suppose now that the landing can land at its earliest on runway A at time t_L. Then the takeoff T can be initiated on runway B at a time t_T < t_L, if not limited otherwise.
This "forward slip" of the takeoff would not be permitted by the optimization in general, because it violates the monotonicity of the objective function. The forward slip could be achieved through an alternative sequence. This argument thus implies that as we increase the number of runways we should increase the MPS, to allow more sequences to be examined. Note, however, that we can always apply the FCFS TER on the resulting optimal sequence to take advantage of any additional improvements, from forward slips, that the MPS restrictions did not allow for. We will return to this topic later on, when we talk about the application of the parallel processor to the RSP with multiple runways.
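The greedy assignment rule 2.8 can be sketched in code; the following Python illustration keeps per-runway last-operation and last-landing records, with all names and the example data being assumptions of the sketch:

```python
def fcfs_assign(order, runways, kind, earliest, sep):
    """Greedy FCFS runway assignment in the spirit of rule 2.8: each
    aircraft, taken in FCFS order, gets the runway on which it can
    operate earliest, given the assignments of its predecessors.

    order    -- aircraft ids in FCFS order
    earliest -- earliest[ac][r] = earliest time ac can operate on runway r
    kind     -- kind[ac] in ('L', 'T')
    sep      -- sep(i, ri, j, rj); a value of 0 means no coupling
    """
    times, assign = {}, {}
    last_op, last_land = {}, {}          # runway -> last aircraft records
    for ac in order:
        best_r, best_t = None, None
        for r in runways:
            t = earliest[ac][r]
            for record in (last_op, last_land):
                for rp, prev in record.items():
                    s = sep(prev, rp, ac, r)
                    if s > 0:            # the > 0 clause permits forward slips
                        t = max(t, times[prev] + s)
            if best_t is None or t < best_t:
                best_r, best_t = r, t
        assign[ac], times[ac] = best_r, best_t
        last_op[best_r] = ac
        if kind[ac] == 'L':
            last_land[best_r] = ac
    return assign, times
```

With two uncoupled runways and a 90-second same-runway separation, two simultaneous arrivals end up on different runways, both at their earliest times.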
On The Use Of The TER As A Constraint
In this subsection we want to make just a few remarks on the TER. The TER has been constructed so as to incorporate the earliest possible time (EPT) constraints, thus always producing an EPT-feasible schedule, and now we want to compare this approach with the alternative, where a simple TER would simply add up all the previous time separations. In the latter case we would have to check for EPT violations explicitly, rejecting as inferior any sequence π in which any aircraft π_j substantially⁷ violates its E_{π_j}, unless all its preceding aircraft π_i had

E_{π_i} ≤ E_{π_j}, ∀ i < j
If this condition holds, it means we have discovered a new Busy Period in π with less than n aircraft, and so we have a partition of the aircraft into two independent sets. We can then optimize these two sets individually.
Now such an approach clearly suffers from technical complications if more
busy periods appear in the optimal schedule, because then we have two smaller
problems to solve and we must further decide which of the aircraft in the second
BP is going to be first. Additionally we would still have to modify the TER to
properly handle mixed takeoffs and landings and the multiple runways.
Taking this modification a step further, by also including the E_i in the TER we do away with explicit fathoming due to EPT infeasibility. We have not, however, affected the size of the computation, because this fathoming is indirectly carried out, most of the time, by virtue of the fact that any sequence that is EPT-infeasible according to the simple TER, and furthermore violates the above condition, will have an exorbitant cost that will cast it inferior. This inferiority will be established very quickly by the TER in the incremental evaluation of the operating times and, for instance, a combinatorial
⁷By substantially we mean some fraction of the average least time separation.
search procedure will backtrack. Meanwhile we have done away with tedious
details concerning the formation of new BPs and the best aircraft to initiate the
schedule in the subsequent BPs if more BPs emerge in the optimal schedule.
More will be said on this issue after we discuss the solution procedure.
2.2.7 The Cost Function
The cost function f(π^S, r^S) maps the set of cartesian products of permutations π^S and runway assignments r^S onto the real numbers. The purpose of having a cost function is to minimize the aircraft delays in the airport terminal area.
There are two basic requirements that have to be satisfied by the cost
function, with respect to our solution procedures, and will be looked at in
detail later on in the next chapter, namely:
• The monotonicity with increasing position in the sequence.
• The separability of the cost function, which entails the ability to evaluate the cost function incrementally. This property ensures that we can make incremental inferences about the quality of incomplete sequences containing the same aircraft.
It is also important to note that the quality of the solution will depend on the aggregate level of constraints. In this respect we first note that the gains in optimality will be a function of the maximum number (MPS) of allowed position shifts. A lower bound on the expected improvement is given in
the following subsection as a function of MPS, for landings only on a single
runway. The interpretation of this lower bound is that by optimizing we will
get an expected improvement at least as good as the bound.
Two alternative cost functions have been proposed, one of which minimizes
delays explicitly and the other implicitly, by expediting the rate at which op-
erations take place i.e. the operational capacity. We will begin with the latter
cost function, called Total Busy Time.
Total Busy Time (TBT)
As we discussed in the introduction to this chapter (see page 14), optimizing
the operational capacity is important, especially in situations of congestion and
we quoted the queue theoretical result which illustrates the sensitivity of the
system delays to minute changes of operational capacity.
Now the operational capacity is the inverse of the average service time which
is in turn the total busy time - defined as the sum of the busy periods of the
system - divided by the total number of aircraft in them. The queue theoretical
mean service time translates, of course, to the average least time separation,
and is a function of the aircraft sequencing. Thus, in order to maximize the
operational capacity we have to find the sequence of aircraft that minimizes
the total busy time (TBT).
In the absence of earliest possible time (EPT) constraints we will certainly
have only one busy period and therefore minimizing the TBT is equivalent to
minimizing the last operating time, given by the time evaluation rule (TER)
discussed in the previous section. We can also formally write the cost function
as:
min_{π^S} z^{TBT}(π^S, r^S) = min_{π^S} t_{π_n}

(where n is the total number of aircraft)
However if, due to the EPT constraints, the optimal schedule contains more than one BP, then it suffices to minimize the last operating time within each one of the new BPs. The average least time separation will be computed from the sum of the lengths of the BPs in the optimal schedule.
Intuitively we can see that the existence of several BPs in the optimal
schedule is a positive sign. The more BPs we have in the optimal schedule the
better the optimal schedule compares to the FCFS.
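For very small instances, the minimization of the TBT over the MPS-feasible sequences can be sketched by exhaustive enumeration (a purely illustrative Python sketch, far less efficient than the search procedures developed in this thesis; all names and the example separations are assumptions):

```python
from itertools import permutations

def best_tbt(n, earliest, sep, mps):
    """Exhaustively minimize the last operating time over all sequences
    within mps position shifts of the FCFS order 0..n-1 (landings only,
    single runway). O(n!) - practical only for tiny n.
    """
    def last_time(seq):
        t = earliest[seq[0]]
        for k in range(1, n):
            t = max(earliest[seq[k]], t + sep(seq[k - 1], seq[k]))
        return t

    best = None
    for seq in permutations(range(n)):
        # aircraft are named by FCFS position, so |pos - ac| is the shift
        if all(abs(pos - ac) <= mps for pos, ac in enumerate(seq)):
            cand = last_time(seq)
            if best is None or cand < best[0]:
                best = (cand, seq)
    return best
```

With one slow aircraft (130-second separation when it leads or trails a landing behind it) and two fast ones (60 seconds), the slow-first sequence wins with a last operating time of 120 seconds, illustrating that descending-speed pairs are what the optimization avoids.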
Total Weighted Delay (TWD)
The total weighted delay is a cost function that explicitly minimizes the aircraft delays. In addition it provides for discriminating among the aircraft by means of weights, which reflect their individual characteristics, as for instance fuel consumption, number of passengers, etc. In TWD each aircraft i contributes to the objective function an amount of delay equal to the length of the time interval between its earliest possible arrival time E_i and its time of operation t_i, weighted by a certain number w_i. So we can write the following expression:

z^{TWD}(π^S, r^S) = Σ_{i=1}^{n} w_i (t_i − E_i)   (2.9)

where the operating time t_i will be given by the TER.
We observe that this expression is adequate if we have to compute the TWD of an entire sequence. In a backtrack search, as well as in dynamic programming solution procedures, we are interested in incremental evaluation of the TWD along the sequence. Thus we are interested in the value of the objective (cost) function up to position i in the sequence. In the case of TBT that was no problem, since the incremental value of z^{TBT} was given by t_{π_i}. With TWD, if all aircraft are present at the beginning of the optimization, we can also easily write down an expression for the incremental values of the TWD objective as:
z_i^{TWD} = z_{i-1}^{TWD} + (t_{π_i} − t_{π_{i-1}}) (W − W_{i-1})

where

W = Σ_{j∈S} w_j = total weight,  W_{i-1} = Σ_{k≤i-1} w_{π_k}

In this recursive formula we note the following:
• the second term of the RHS is the increment in delay accrued by the remaining aircraft between the (i−1)-st and i-th operations. It is equal to the time interval between these two successive operations multiplied by the total weight of the aircraft which are being delayed.
• the factor (t_{π_i} − t_{π_{i-1}}) is used for the time interval because it accounts for the fact that the earliest time constraints are already incorporated in the TER, as well as for the insertion of takeoffs. In case of no earliest time violation, with landings only, this expression reduces to the least time separation t_{π_{i-1},π_i}.
When not all the aircraft are initially available then the delay increment
will have to be modified. It will be the sum of the following two parts:
1. The sum of the delays of the aircraft that arrived in the last interoperation interval, which equals

   Σ_{t_{π_{i-1}} < E_{π_k} ≤ t_{π_i}} w_{π_k} (t_{π_i} − E_{π_k})

2. The weight sum, multiplied by the interoperation interval, of only those aircraft, in the set U, that arrived prior to t_{π_{i-1}} and have not yet operated. We can thus write this contribution as

   (t_{π_i} − t_{π_{i-1}}) Σ_{k∈U} w_k
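For the all-aircraft-present case, the incremental TWD update can be sketched as follows (Python; all names are assumptions of the illustration, and the modification for late arrivals described in parts 1 and 2 above is not included):

```python
def twd_partial(seq, earliest, sep, weight):
    """Incremental Total Weighted Delay, assuming every aircraft is
    present at the start: after each operation, the interval since the
    previous operation is charged to the total weight of the aircraft
    that have not yet operated (including the one operating now).

    Returns the partial TWD value after each position of seq.
    """
    remaining_w = sum(weight[ac] for ac in seq)
    z, t_prev, partial = 0.0, None, []
    for k, ac in enumerate(seq):
        t = earliest[ac]
        if k > 0:
            t = max(t, t_prev + sep(seq[k - 1], ac))
            z += (t - t_prev) * remaining_w   # delay accrued in the interval
        remaining_w -= weight[ac]             # ac has now operated
        t_prev = t
        partial.append(z)
    return partial
```

The final partial value agrees with evaluating expression 2.9 directly; for example, three aircraft available at time 0, with weights 1, 2, 3 and constant 60-second separations, give partial values 0, 300 and 480, and 480 = 1·0 + 2·60 + 3·120.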
2.3 Computing An Expected Lower Bound On Runway Capacity Through Scheduling
In this section we develop and present a lower bound (LB) on the expected
runway capacity under scheduling. The LB is affected by the MPS constraints,
which were described in the previous section. In particular, the LB is a function
of the maximum number of permitted position shifts (MPS). Furthermore, the
LB depends on the approach speed range of the aircraft using the runway. In
the following discussion, after a summary of the assumptions, definitions and
relevant formulas, we analyze the LB as a function of MPS and of the speed
range and present the results for a set of different speed ranges. The section
concludes with quantitative remarks on the results.
2.3.1 Assumptions
1. Landings Only
2. Single Runway
3. One Weight Class
4. Time Window Constraints Are Not Critical
5. Poisson Arrivals Of Aircraft
2.3.2 Definitions

t_{i,j} = S/v_j + δt_{i,j}, ∀ i, j
δt_{i,j} = F max(0, 1/v_j − 1/v_i)
S = 3 N. Miles
F = 5 N. Miles
a = min. speed
b = max. speed
k = the length of the interval within the FCFS sequence in which we find the E(δt).
t_min = E(min_{i∈K}(t_i)), where K = a set of k aircraft.
t_max = E(max_{i∈K}(t_i))
dif = t_max − t_min
E(δt) = dif

2.3.3 Formulas Used
1. E(max(0, 1/v_j − 1/v_i)), for v_i, v_j independent speeds uniform on (a, b).
2. For the minimum and maximum of k independent samples uniform on (a, b): E(min) = a + (b − a)/(k + 1), E(max) = b − (b − a)/(k + 1).

2.3.4 Rationale And Derivations:
The expected runway capacity is defined as the inverse of the mean service
time, where, by mean service time we imply the average least time separations
between successive operations on the runway. Therefore decreasing the mean
service time will increase the runway capacity, which, in turn, is responsible
for delay reductions. In the following discussion we will try to show that there
is an absolute minimum decrease in the mean service time, E(δt), that can be
achieved through optimization, corresponding (to first order of accuracy) to an
equal percentage increase in capacity. We will then show how this relates to
delay reductions.
This discussion is valid for a traffic composed of same weight class aircraft,
waiting to land on a single runway and with non binding earliest time constraints.
We start by showing how to compute the mean time separation and for this
purpose we further assume that the aircraft arrive according to a Poisson arrival
process, which is equivalent to assuming that new arrivals are independent
of previous ones. Also, the approach speed of the arriving aircraft is often
discretized, so t_ij will stand for the least time separation between aircraft
belonging to speed classes i and j. If we let P_i be the probability of an aircraft
belonging to class i, we can compute the mean service time E(t_ij) for a FCFS
sequence - the random arrival sequence - with the following formula:
E(t_ij) = Σ_{i,j} P_i P_j t_ij        (2.10)
Though most of the work so far has been based on discrete distributions, here
we use a continuous probability density function (PDF) for the speed in the
range (a, b) (a and b represent the minimum and maximum speeds), because it
provides analytical formulas and is more realistic. So, assuming a uniform
PDF:

f(v) = 1/(b - a),   v ∈ (a, b)
and replacing the summation by integration we have:

E(t_ij) = ∫_a^b dv_i f(v_i) ∫_a^b dv_j f(v_j) t_ij(v_i, v_j)
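As a concreteness check, the double integral above can be evaluated numerically. The sketch below is ours, not the thesis code; it uses the midpoint rule with S = 3, F = 5 and the (90, 140) knot speed range, and returns the mean service time in seconds:

```python
# A numerical check (ours, not from the thesis) of the mean service time
# E(t_ij) = E(S/v) + E(dt) for uniformly distributed approach speeds.
# Speeds in knots, S and F in nautical miles, result converted to seconds.
S, F = 3.0, 5.0
a, b = 90.0, 140.0

def t_ij(vi, vj):
    # least time separation in hours: sequence-independent part S/vj plus
    # the sequence-dependent part F*max(0, 1/vj - 1/vi)
    return S / vj + F * max(0.0, 1.0 / vj - 1.0 / vi)

def mean_service_time(n=400):
    # midpoint rule over the (a, b) x (a, b) sample space of (v_i, v_j)
    h = (b - a) / n
    vals = (t_ij(a + (p + 0.5) * h, a + (q + 0.5) * h)
            for p in range(n) for q in range(n))
    return 3600.0 * sum(vals) / (n * n)

print(round(mean_service_time(), 1))   # close to 95.4 + 11.7 seconds
```

For the (90, 140) range this lands near 107 seconds, i.e. E(S/v) of about 95 seconds plus E(δt) of about 12 seconds, in line with the figures reported in section 2.3.5.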
Our subsequent results are based on the following observations:
1. From:

t_ij = S/v_j + F · max(0, 1/v_j - 1/v_i)        (2.11)

we see that t_ij decomposes into the sum of two terms. The first term, S/v_j,
is sequence independent. So it contributes a constant amount to the total
busy time (TBT) and is thus of no interest to the optimization of TBT¹.
The second term, δt_ij, is sequence dependent and, noting also that we can
express:

E(t_ij) = E(S/v) + E(δt_ij)

we have to center our efforts on minimizing E(δt_ij), which is the
expected limit in improvement. Figure 2.1 shows the sample space for
the evaluation of E(δt_ij).
2. The sequence dependent term, δt_ij, is zero when the following aircraft,
j, is faster than its preceding aircraft i. Due to this fact only descending
speed sequences contribute towards the sequence dependent cost and
should, therefore, be avoided.
3. The total sequence dependent cost - the sum of successive δt_ij's - of any
descending speed subsequence is proportional to 1/v_n - 1/v_1, where 1 and n
are the first and last aircraft of the subsequence. The intermediate terms
cancel, as can be verified in the following example with v_1 > v_2 > ... > v_n:

F [(1/v_2 - 1/v_1) + (1/v_3 - 1/v_2) + ... + (1/v_n - 1/v_{n-1})] = F (1/v_n - 1/v_1)        (2.12)
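The cancellation can be verified mechanically; a tiny sketch in exact rational arithmetic (the speed values below are ours, chosen only for illustration):

```python
from fractions import Fraction

# A quick check of the telescoping identity (2.12): summing the successive
# dt terms of a descending speed run v1 > v2 > ... > vn collapses to
# F * (1/vn - 1/v1); the intermediate 1/v terms cancel exactly.
F = 5
speeds = [140, 130, 120, 100, 95]   # an example descending subsequence (knots)

pairwise = sum(F * (Fraction(1, vj) - Fraction(1, vi))
               for vi, vj in zip(speeds, speeds[1:]))
collapsed = F * (Fraction(1, speeds[-1]) - Fraction(1, speeds[0]))
assert pairwise == collapsed   # exact rational arithmetic, no rounding
```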
¹However, S/v is important in the minimization of total weighted delay.

Figure 2.1: Sample space for calculating E(max(0, 1/v_j - 1/v_i))

Figure 2.2: The optimal sequence without MPS constraints (speed vs. landing order).

From these observations we may deduce that the optimal sequence, in the
absence of MPS constraints, would contain at most one descending subsequence,
descending from the first fixed aircraft (the previous landing) to the
slowest aircraft of the set. In other words, we have the freedom to rearrange
all the aircraft in ascending speed order but we are stuck with the initial fixed
aircraft - the previous landing - which will, with large probability, be faster
than the slowest aircraft in the set, thus imposing a descending subsequence
from the fixed aircraft to the slowest. One way to arrange such an optimal
sequence is by placing the slowest aircraft right next to the fixed one, as shown
in figure 2.2.
The expected additional cost, E(δt), in this unconstrained case is the
expected cost of the descending subsequence, divided by the total number of
aircraft, and thus we expect that it would be negligible for a large enough
aircraft set.
Now, in the presence of MPS constraints, we have to accept considerably
more descending speed subsequences, and we can resequence the aircraft in
order to minimize the total extra length (the sum of the 1/v_n - 1/v_1 terms) of all the descending
subsequences.

Figure 2.3: Example of an FCFS arrival sequence with many descending speed
subsequences, as well as resequencings (shown for MPS = 4 and MPS = 5) that
contain one descending subsequence for every 2MPS interval.
In a real situation we may use elaborate techniques to arrive at the optimal
sequence under the MPS constraints. We have, however, no way of computing
the expected improvement - i.e. the expected reduction in E(δt_ij) - under such
optimization. So, in order to arrive at some lower bound to this reduction, we
propose a suboptimal procedure, whose performance we can estimate based on
the following claim:
Given an FCFS sequence we can always rearrange the aircraft, in a
MPS feasible way, so that the new sequence will have at most one
descending subsequence for every 2MPS wide interval of the FCFS
sequence.
PROOF: First note that figure 2.3 shows a typical FCFS sequence and
resequencings for MPS values of 4 and 5. The resequencings are MPS
feasible rearrangements of the FCFS sequence, containing only one
descending speed subsequence in every 2MPS long interval.

Then, to justify the claim, we recall the fact that within any MPS
long interval we have the liberty to rearrange the aircraft in any
desirable way. Thus, we split the original sequence into MPS long intervals
M_1, M_2, M_3, ... and then create descending sequences in the odd MPS
intervals and ascending sequences in the even intervals.

The maximum speed of the descending sequence in M_3, for example, is the
maximum speed over M_2 and M_3, whereas its minimum is the minimum
speed over intervals M_3 and M_4.

In general the descending subsequence in the odd interval 2i + 1 starts
with the maximum speed that has occurred in the intervals 2i and
2i + 1 and terminates with the minimum speed that has occurred in the
intervals 2i + 1 and 2i + 2.

The above argument concludes the justification of the claim.
QED
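The interval construction in the proof is easy to state as code. The following sketch is an illustration under our own simplification (it sorts strictly within each MPS-long interval rather than carrying maxima and minima across adjacent intervals), and it also verifies MPS feasibility:

```python
import random

# A sketch of the construction in the proof: cut the FCFS sequence into
# MPS-long intervals M1, M2, ..., sort M1, M3, ... (even k below) into
# descending and M2, M4, ... into ascending speed order, then check that
# every aircraft moved at most MPS positions (the MPS feasibility condition).
def rearrange(speeds, mps):
    order = []
    for k, start in enumerate(range(0, len(speeds), mps)):
        block = list(range(start, min(start + mps, len(speeds))))
        block.sort(key=lambda i: speeds[i], reverse=(k % 2 == 0))
        order.extend(block)
    return order          # order[p] = FCFS index of the p-th landing

random.seed(3)
speeds = [random.uniform(90, 150) for _ in range(24)]
order = rearrange(speeds, mps=4)
# sorting within an MPS-long block shifts a position by at most MPS - 1
assert all(abs(p - i) <= 4 for p, i in enumerate(order))
```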
Accepting the above claim, the sequence dependent E(δt*) resulting from
this rearrangement equals the expected cost of a descending subsequence
recurring every 2MPS aircraft - and using the expression 2.12 we have:

E(δt*) = F · E(1/v_min(2MPS) - 1/v_max(2MPS)) / (2MPS)        (2.13)

where v_min(2MPS) and v_max(2MPS) are the minimum and maximum speeds
that occur in a 2MPS wide FCFS interval (i.e. the minimum and maximum
speeds in 2MPS random arrivals).
2.3.5 Results and Conclusions
Though E(δt_ij) is easy to evaluate analytically, the quantity E(δt*) requires
tedious calculations involving order statistics. So, we preferred to run a small
experiment instead, by generating 1000 random speeds, uniformly distributed
within a given range. The experiment was then repeated for a variety of realistic
speed ranges.
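A sketch of such an experiment (ours, not the original program) is shown below. It estimates the FCFS E(δt) directly from consecutive pairs and E(δt*) from the 2MPS order statistics of equation 2.13, with speeds in knots and F = 5 N. miles:

```python
import random

# A small-scale repeat of the experiment described above (our sketch, not
# the thesis code): estimate the FCFS value E(dt) and the bound E(dt*) of
# equation 2.13 from uniformly distributed speeds; results in seconds.
F = 5.0

def e_dt_fcfs(speeds):
    # mean sequence dependent separation in the random (FCFS) order
    costs = [F * max(0.0, 1.0 / vj - 1.0 / vi)
             for vi, vj in zip(speeds, speeds[1:])]
    return 3600.0 * sum(costs) / len(costs)

def e_dt_star(speeds, mps):
    # one descending run of cost F*(1/vmin - 1/vmax) per 2*MPS arrivals
    w = 2 * mps
    blocks = [speeds[i:i + w] for i in range(0, len(speeds) - w + 1, w)]
    costs = [F * (1.0 / min(b) - 1.0 / max(b)) for b in blocks]
    return 3600.0 * sum(costs) / (len(costs) * w)

random.seed(87)
speeds = [random.uniform(90.0, 140.0) for _ in range(1000)]
print(round(e_dt_fcfs(speeds), 1), round(e_dt_star(speeds, 6), 1))
```

For the (90, 140) range and MPS = 6, the estimates come out near the 11.56 and 4.97 second entries of figure 2.5.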
Figure 2.4 shows the variation of E(δt*) as a function of MPS for various
speed ranges, quoted at the tip of the corresponding curves. Note that E(δt_ij),
the FCFS value, is plotted at MPS = 0. The unit on the vertical axis is one
second.
Also, figure 2.5 presents a summary of the expected fixed and sequence
dependent parts - for the FCFS and MPS = 6 cases - of the mean service
time. The table also shows the sequence dependent parts, normalized by (i.e.
as a percentage of) their respective fixed part, which is considerably greater.
Figure 2.4: The variation of E(δt*(2MPS)) (in seconds) as a function of 2MPS,
for the speed ranges (80 to 150), (90 to 180), (90 to 150), (90 to 140),
(100 to 150), (110 to 150) and (120 to 150).

                      FCFS                         MPS = 6
Speed       E(S/v)    E(δt)    E(δt) as %    E(δt*)   E(δt*) as %   % Capacity
Range       seconds   seconds  of E(S/v)     seconds  of E(S/v)     Improvement
(90, 140)   95.44     11.56    12.13         4.97     5.21           6.92
(90, 150)   91.95     12.83    14.02         5.43     5.93           8.09
(90, 180)   83.18     15.69    18.88         6.93     8.34          10.54
(100, 150)  87.58      9.84    11.24         4.08     4.6            6.64
(110, 150)  83.74      7.20     8.58         3.08     3.67           4.91
(120, 150)  80.33      4.98     6.20         2.15     2.68           3.52
(80, 150)   96.98     16.82    17.35         7.33     7.56           9.80

Figure 2.5: Summary of results

The maximum MPS value used was 6 (i.e. 2MPS = 12) and the results
so far were positive in the sense that we can reduce the sequence dependent
part E(δt) by more than 50% consistently for all the speed ranges that we have
tried.
On the other hand the unoptimized E(δt) is only within 6% to 18% of the
expected sequence independent component, E(S/v), and very sensitive to the
movements of the lower limit of the speed range.
Next, the expected capacity μ can be approximated by

μ = 1 / (E(S/v) + E(δt))

and in percentage terms:

δμ/μ = (E(δt) - E(δt*)) / (E(S/v) + E(δt))

i.e. the percentage capacity improvement, δμ/μ, equals the percentage decrease
in mean service time, which lies in the range of 4% to 10% for MPS = 6.
So in conclusion we can make the following three points:

1. Though on the order of 10%, the increase in capacity (service rate) can
bring about a more substantial reduction in the average delay experienced
by aircraft, especially in case of congestion, when the arrival rate,
λ, reaches or temporarily exceeds the service rate μ.

In order to appreciate the effects of the percentage reduction in mean
service time on the percentage reduction in average delay D we quote the
queue theoretical result:

D = λ / (μ - λ)

which, again in percentage terms, gives:

δD/D = (μ/(μ - λ)) · (δμ/μ) = (μ/(μ - λ)) · (E(δt) - E(δt*)) / (E(S/v) + E(δt))        (2.14)

and we note that the percentage reduction in average delay is proportional
to the percentage reduction in service time multiplied by a factor
proportional to (μ - λ)^(-1). In the case of congestion λ tends to μ and this factor blows up.
Consequently the delay, in periods of congestion, is extremely sensitive
to the variation of the mean service time.
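The blow-up of that factor can be made concrete with a toy computation, under the steady-state formula quoted above and hypothetical traffic rates of our choosing:

```python
# A numeric illustration of the sensitivity in (2.14): the same 5% gain in
# service rate mu buys a far larger percentage delay reduction as the
# arrival rate lambda approaches mu. Rates are hypothetical, per hour.
def delay(lam, mu):
    # average delay proportional to lambda / (mu - lambda), as quoted above
    return lam / (mu - lam)

mu = 33.0                          # about 109 s mean service time
drops = []
for lam in (20.0, 28.0, 32.0):     # light, heavy, near-saturated traffic
    d0, d1 = delay(lam, mu), delay(lam, 1.05 * mu)
    drops.append(100.0 * (1.0 - d1 / d0))
print([round(d, 1) for d in drops])   # -> [11.3, 24.8, 62.3]
```

The identical 5% service-rate improvement cuts delay by about 11% in light traffic but by over 60% near saturation.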
2. The optimization is more useful in the presence of a wide speed range
and is sensitive to the lowest speed.

3. Since the means of the maximum and minimum speeds in 2MPS random
aircraft arrivals converge fast to the limits of the speed range, we can
see that the numerator of the fraction in equation 2.13 soon approaches
a constant value. Thus E(δt*) behaves like the function 1/x, which is
asymptotic to the horizontal axis. Consequently we should choose some
MPS value such that E(δt*(2MPS)) is close enough to zero. How close will be
determined from other restrictions on the value of MPS.
2.4 Literature Review
So far the literature on terminal area congestion management can be divided
into research papers suggesting metering and flow control and papers
suggesting more sophisticated runway scheduling procedures. The former papers
are expertly covered by Dear and Pararas ([12], [27]). The latter range
from papers that directly relate to runway scheduling to papers that cover
neighboring areas and general solution methods. In beginning our discussion it
is instructive to first identify the general methods and point out their relative
merits. We will then move on to describe the work done on the problem and
its affiliates.
In order to appreciate the suitability of the various solution procedures
we should recall two special circumstances in runway scheduling. First,
runway scheduling is destined for a real time application and this makes the
speed of computation a critical factor in the choice of a solution procedure.
Combinatorial problems are notorious for the amount of computer time they
require. However the algorithm we choose must run faster than the aircraft
can operate in the optimal schedule or else there is no point in optimizing.
The second delicate point in ATC is the emergency situations that arise and
demand that certain aircraft be given a special treatment. So the controller
may have to impose additional arbitrary constraints on the schedule. Other
peculiarities of runway scheduling will reveal themselves along the way.
Broadly speaking there are four general categories of solution methods
based on heuristics, mixed integer and linear programming (MILP), branch
and bound (BB) and dynamic programming (DP).
In general heuristics are fast, speed being their most profound merit in
our real time environment, but lack in the quality of their solutions.
Furthermore, the special circumstances of runway scheduling limit our
possibilities. Heuristics, other than Dear's "tail optimization" - to be
discussed shortly - would have difficulty handling a weighted delay objective
and arbitrary additional schedule constraints. It should also be emphasized
that total weighted delay, which is our ultimate objective relating to fuel
economy, passenger comfort and safety, is very sensitive to reductions in
the total time of a schedule and this adds incentive for exact algorithms.
The MILP based solutions have their shortcomings too, and they have a
reputation of being too general to usefully exploit the weaknesses of specific
problems. Although recent research on the TSP has produced impressive results
(for example, Crowder and Padberg [11] solved a 318 node TSP in 50 minutes
using cutting planes and polyhedral theory, and Balas and Christofides [3]
solved large asymmetric TSPs using a relaxation scheme for the subtour
elimination constraints of the TSP), MILP is still not a promising approach.
Its weakness lies mainly in that increasing the number of constraints
increases the size of the problem, and this is exactly the point that
enumeration procedures thrive on.
As with the heuristics, MILP would have difficulty accepting a weighted
delay objective function; the controller's interventions and the inclusion of
takeoffs would require additional constraints whose number and form are not
known ahead of time. In other words, the fact that takeoffs can be accommodated
between successive landings without stretching their time interval would require
constraints, similar to subtour elimination constraints, that come about in the
course of computation. If these constraints are ignored then we will possibly
end up with landings spaced dangerously close.
So we are left with BB and DP, which are related by the fact that they
search the same space; quoting Rinnooy Kan [34]: "... DP is typically
faster than BB but only at the expense of more complicated bookkeeping".
The power of BB lies in the fact that it can usefully exploit the additional
constraints and that it always has an improved solution available to us, with
very minimal requirements in computer memory. It is extremely flexible in terms
of implementing additional constraints on the schedule and can accommodate
a weighted delay objective function. Its weakness lies in its worst case
performance, where it may take an amount of time proportional to n! (n being
the number of aircraft to be scheduled) to arrive at the optimal solution.
On the other hand DP has almost all the advantages of BB and an exponential,
though much smaller, solution time of order n²2ⁿ. The weakness of the
DP approach, at least in its present form, lies in the difficulty of adequately
exploiting the position shifting (MPS) constraints and the difficulty in
including the takeoffs and multiple runways.
Finally, the inspiration for deriving the algorithm in this thesis is to a great
extent attributable to the ideas about permutations with forbidden positions
found in the "Introduction To Combinatorial Mathematics" by C. L. Liu.
Next we look at the work that most directly relates to runway scheduling.
This is mainly comprised of the Ph.D. theses of Dear and Psaraftis, both
carried out at the Flight Transportation Laboratory, MIT. Dear in his Ph.D.
thesis [12] was the first to confirm that we can substantially improve on the
terminal area delays by means of rescheduling. He corrected the calculation
of the least time separations between successive landings on the same runway,
which are functions of the weight and approach speed of the separated aircraft,
and introduced the Maximum Position Shift (MPS) constraints, which limit the
position of any aircraft in the optimal sequence to within a prespecified number
of displacements away from its position in the first-come-first-served (FCFS)
sequence.
The rationale of these constraints is to both limit the computational load
and eliminate the possibility that a given aircraft is indefinitely delayed. As
we shall see, these constraints are the central theme of this thesis because of
the special computational structure that they impose on the runway scheduling
problem.
The Ph.D. thesis of Psaraftis [31] was the follow-up to Dear's work. Psaraftis
traced the theoretical origins of runway scheduling to the Traveling Salesman
Problem and employed Dynamic Programming (DP) techniques for its solution.
He also discretized the approach speed range into a limited number m of speed
classes. The immediate consequence of this discretization is that the time
separation matrix is now a constant m x m array instead of an ever changing
n x n array, where n is the number of aircraft to be scheduled. The interesting
implication of this reduction is that the computational size of the problem is
now exponential in the number of speed classes and polynomial in the number
of aircraft to be scheduled. Note, however, that the merit of this discretization
is diminished when we consider a traffic mix containing the three different
weight categories, since the number of composite - speed and weight - classes
is then tripled.
Finally, Psaraftis took advantage of the MPS constraints by disregarding states
which were not feasible and thus saving on computation. This reminds one
of the Branch and Bound (BB) fathoming procedure. A similar hybrid DP
and BB procedure was used by Barnes [4] in an almost isomorphic problem
of job scheduling with sequence dependent setup costs. Barnes has interesting
ideas for making fathoming efficient. He uses, for instance, a fast heuristic to
estimate a lower bound on the cost of the remaining stages and thus effectively
limits the amount of tree search. Such an idea was employed by the author in
earlier experimentation with BB procedures, and is an option available to the
algorithm developed in this thesis, though perhaps less important because of
its computational overhead.
Psaraftis also gave a comprehensive account of the existing literature on the
TSP and the complexity of the exact and heuristic procedures, and noted some
special difficulties in the runway scheduling problem that render the heuristic
procedures inappropriate. For instance, the heuristics for the TSP on the plane
cannot deal with the earliest time constraints - since the aircraft are not all
available to land at the time of the optimization - neither can they accommodate
the average weighted delay objective function.
Dear, Psaraftis and most of the other researchers on the topic addressed a
limited problem of scheduling landings only on a single runway. So the issues
of mixed takeoffs and landings as well as the scheduling of multiple runways
have not been dealt with.
Finally, Pararas [27] tries to analyze the automation of the terminal area Air
Traffic Control system by breaking down the associated decision process into
a number of automation functions. Of these functions, runway scheduling
and flight plan generation are identified as containing the greatest potential for
providing efficiency to the terminal area operations. Pararas gives an exhaustive
list of the specifications of the automated system and underlines that the
man-machine relationship should be of the master-slave type. In other words,
the human controller should be able to impose additional constraints - requests -
on the schedule.
Before continuing with related problems we should mention an interesting
ATC development from W. Germany. Volckers [40], in "Computer
Assisted Arrival Sequencing And Scheduling With The COMPAS-System",
describes an effort, similar to that undertaken at the Flight Transportation
Laboratory - MIT, designed to improve the terminal area ATC for the Frankfurt
airport. The report acknowledges the necessity of runway scheduling and the
inherent difficulty of the problem. However, it uses a rather simple BB based
heuristic for the scheduling of landings only. Furthermore, it is not clear how
the least aircraft separation times are computed.
Broadening now our focus on the literature of combinatorial problems, we
can identify the areas of machine scheduling and vehicle routing as the
immediate relatives of runway scheduling.

The literature on machine scheduling is quite extensive and contains
interesting results which, however, lack one essential feature: the sequence
dependence of the cost function (see survey papers [17], [13], [21]).
Furthermore, the great majority of these problems have constraints that are
irrelevant to runway scheduling - for instance, there may be precedence
constraints or jobs which require processing on more than one machine - and
are static. That means that the set of jobs to be scheduled is known in advance
and the entire set of jobs is available for immediate processing. This is in
contrast with runway scheduling, where the environment is a dynamic one, thus
requiring schedule updates as soon as information about newly arriving aircraft
becomes available to the ATC, and the aircraft are not all immediately available.
Another important difference between machine and runway scheduling is
that the ATC controller has to be able to interact with the schedule and specify
additional arbitrary constraints on it. Such interaction is not of any
concern for machine scheduling problems. It helps, however, together with
the other differences, render the machine scheduling solution procedures (see
Rinnooy Kan [34]), especially the heuristic ones, inappropriate.

So exciting new developments in that area are not directly applicable to the
runway scheduling problem. Most of the textbooks that deal with sequence
dependent machine scheduling ([9], [34], [14]), in support of our own conclusion,
suggest the dynamic programming (DP) and branch and bound (BB) options.
It is worth noting that the sequence dependent machine scheduling problem
with identical machines, and each job requiring treatment on only one machine,
is almost identical to the vehicle routing problem, and in fact various
researchers in machine scheduling ([30]) have borrowed ideas from vehicle routing.
Evidently vehicle routing, especially with time window constraints, is a
closer relative of the runway scheduling problem, and a good account of work
in this field is given by Golden [16] and Bodin [8].
In drawing the analogy between runway scheduling and vehicle routing,
we note that the vehicles correspond to the runways, the customer locations
to the aircraft, and the distances to the least time separations between
successive operations of aircraft on the runways. Further on, the time window
represents the time period in which a customer can accept his delivery and has
its analog in the period defined by the earliest and latest possible times that
an aircraft can operate on the runways. Recent work on vehicle routing
with time window constraints has been carried out by Psaraftis [32], Sexton
[36], Solomon [38], Savelsberg [35] and Kolen et al. [20].
We should, however, contrast the two problems by pointing out their
divergence. For instance, the vehicle capacity is an additional concern that has
no analog in runway scheduling. On the other hand, the facts that the time
separation matrix is asymmetric and that certain interlanding intervals are
large enough to accommodate takeoffs without stretching complicate matters
for runway scheduling.
In concluding this section we want to express the optimistic opinion that
the algorithm developed in this thesis will have potential uses in the general
TSP area as well as the vehicle routing and machine scheduling problems.
2.5 Appendix: Horizontal Separations
This appendix presents brief examples of IFR (Instrument Flight Rules)
separations between operating aircraft as described in the ATC Handbook.
For more information see reference [3].
The convention for table entries is (LEADING, FOLLOWING). L and D will
stand for landing and departure respectively.
The US classification of aircraft according to Maximum Gross Takeoff
Weight (MGTOW) is the following:

Heavy : MGTOW ≥ 300,000 lbs.
Large : 12,500 lbs. < MGTOW < 300,000 lbs.
Small : MGTOW ≤ 12,500 lbs.
L-L Final Approach Separations (nautical miles), leading aircraft in rows,
following aircraft in columns:

          Heavy   Large   Small
Heavy       4       5       6
Large       3       3       4
Small       3       3       3
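For illustration only, the distance minima above can be converted into rough least time separations by dividing by the follower's approach speed. The speeds below are assumed for the sketch and are not from the handbook, and the real t_ij of chapter 2 also accounts for where along the approach the separation applies:

```python
# A hypothetical illustration (not from the handbook): turning the L-L
# distance matrix above into least time separations via t = d / v_follow,
# with assumed representative approach speeds per weight class in knots.
SEP_NM = {('H', 'H'): 4, ('H', 'L'): 5, ('H', 'S'): 6,
          ('L', 'H'): 3, ('L', 'L'): 3, ('L', 'S'): 4,
          ('S', 'H'): 3, ('S', 'L'): 3, ('S', 'S'): 3}
SPEED_KT = {'H': 140, 'L': 130, 'S': 110}   # assumed speeds, for illustration

def t_seconds(lead, follow):
    # time for the trailing aircraft to fly the required distance
    return 3600.0 * SEP_NM[(lead, follow)] / SPEED_KT[follow]

print(round(t_seconds('H', 'S'), 1))   # heavy leading, small trailing -> 196.4
```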
D-D Separations when initial departure courses diverge more than 15° and
radar identification can be established within 1 nm of the runway (leading
aircraft in rows, following aircraft in columns):

          Heavy           Large           Small
Heavy     2 minutes or    2 minutes or    2 minutes or
          5 nm            5 nm            4 nm
Large     1 nm            1 nm            1 nm
Small     1 nm            1 nm            1 nm
D-L Separation is defined as 2 nm if the separation will increase to 3 nm within
1 minute after takeoff. This rule is a little hazy and its interpretation
remains with the controller (see above reference for more explanations).
L-D Separation is not explicitly defined. The rule is that a takeoff can be
initiated when the runway is clear. So for our purposes we will assume
the corresponding time separation to be equal to the runway occupancy
time of the landing aircraft.
Chapter 3
Combinatorial Concepts
We begin with our definition of a permutation:

Permutation π_S is a vector whose i-th component, π_i^S, is the object
chosen for the i-th position in a sequence of the set of objects S.

and:

"by combinatorial search we mean the search for the best permutation π_S of a set of objects S"

For convenience we number the objects with numbers 1, 2, ..., n (= |S|).

A permutation of objects is synonymous with a sequence, or ordering, or
arrangement of objects, and since we often use graphs, whose nodes represent
the objects of S and whose arcs represent the allowable successions of the
objects, a path on a graph defines a permutation of the set of nodes it traverses.
Thus the words permutation, sequence, ordering and path will often be used
interchangeably.
The size of a combinatorial search is defined as the number of elementary
operations - additions, multiplications, number comparisons etc. - that have
to be executed in order to complete the search. As one might expect the size
of the search, often called the size of computation, is a function of the size of
the problem. Problem size is defined as the number, n, of objects in the set S.
The size of the search translates into the use of a proportional amount of
computer resources and time, and a practical goal of this research is to devise a
search procedure with minimum such requirements. Typically for combinatorial
problems the size of the search is an exponential function of n.
Search procedures which seek optimal answers are called exact procedures
as opposed to heuristic procedures, which compromise the optimality of the
solution, frequently with some reasonable bound on their worst performance,
in order to reduce the time of the search to some polynomial function of the
problem size.
The Parallel Processor presented in the next chapter is an exact procedure
(with certain exceptions to be discussed later on) for implementing combinatorial search which takes advantage of the special restrictions that we impose
on the set of permissible permutations. These restrictions, known as the MPS
constraints (discussed in chapter 2), admit for consideration only those permutations in which objects are moved a maximum number of positions, MPS,
away from their position in some prespecified base permutation. In the context
of runway scheduling, the First-Come-First-Served (FCFS) sequence of operations is a natural choice for the permutation upon which the MPS constraints
should be based.
Recall that the MPS constraints are necessary to avoid situations where
optimal schedules might require some aircraft to incur unacceptably high delays. In effect they provide a bound on the inequity in the delay distribution
experienced by various users.
The objects in the set are named after their position (order) in the FCFS
sequence. This facilitates the verification of compliance, since now we can
express the MPS constraints as

|π_i - i| ≤ MPS
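For small n the constraint can be read off directly by brute force; a sketch with hypothetical n and MPS values:

```python
from itertools import permutations

# A direct, brute-force reading of the constraint above: with objects named
# by FCFS position 1..n, keep only permutations with |pi_i - i| <= MPS.
def mps_feasible(perm, mps):
    return all(abs(obj - pos) <= mps for pos, obj in enumerate(perm, start=1))

n, mps = 6, 1
count = sum(mps_feasible(p, mps) for p in permutations(range(1, n + 1)))
print(count)   # -> 13 feasible permutations out of 720
```

Even at MPS = 1 only 13 of the 720 permutations of six objects survive, which hints at how sharply the constraints prune the solution space.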
The MPS constraints give a special structure to the MPS feasible solution
space, comprised of all the MPS feasible permutations. In fact they create an
MPS "neighborhood of permutations" around the FCFS permutation and allow
the parallel processor to find the optimal permutation in time which is linear
in the number of objects and exponential in the value of MPS.
The parallel processor algorithm (PP) will be seen in chapter 4 to be
equivalent to a special version of dynamic programming, tailored to fit
effectively the MPS constraints and allow the extensions of the problem to
include takeoffs and multiple runways. Takeoffs require special treatment
because (see previous chapter) a number of them may be inserted between
consecutive landings without stretching the interlanding interval. The
multiple runways case also requires modifications of the solution space.
In the following presentation we start with the description of underlying
concepts such as the permutation tree, the combination graph, the cost of a
permutation and the restriction on the choice of cost functions imposed by the
sequential evaluation of permutations. We continue with a short discussion of
straightforward combinatorial search on the permutation tree using fathoming
techniques, followed by a section on the state-stage (SS-) graph. The latter
is a concise representation of the PT for a given cost structure that allows
for stage-wise efficient computational procedures, generally known as dynamic
programming.

Next we examine the MPS tree, defined as the permutation tree pruned by
the MPS constraints. We attempt to calculate the size of complete enumeration
on it, which is found to be an exponential function of the number of objects.
This result disqualifies the use of enumerative tree search procedures. We thus
examine the MPS-graph, defined as the SS-graph for the MPS-tree. To assist
in the calculation of the MPS-tree size we have defined a bipartite graph
(B-graph) representation of the subtrees of the MPS-tree. The B-graphs are also
useful in visualizing some characteristic, stage invariant patterns on the
MPS-tree, which can then be characterized as states of the combination graph
and ultimately help us organize the computation, using parallel processing.

Figure 3.1: Permutation tree of the set {1, 2, 3}
The exposition of the underlying concepts will conclude with the chessboard
representation of the B-graphs which makes evident a labeling scheme for them
that contributes to the parallel processor implementation. A full description of
this algorithm, including the data structures and coding schemes that support
it, as well as an illustration of its workings, will be presented in the next chapter.
3.1 The Permutation Tree
The first basic combinatorial concept that we use is the concept of the
permutation tree (PT) illustrated in figure 3.1. The PT is a representation of
the state space which provides a visualization of all possible permutations of a
set S of n objects.
The PT consists of:
nodes, drawn as boxes, which represent the object selected at each stage or
depth.
arcs or branches that connect the nodes and represent the permissible selections from each node on the tree.
In order to facilitate our discussion, we will give some additional widely used
definitions:
Root of the tree is the special node on the left. In a practical situation the
root represents an object already chosen (for example the aircraft that
just landed or the home of a traveling salesman).
Path is a sequence of arcs connecting two nodes on the PT. We want to
further distinguish between partial and complete paths. By partial path
we mean a path from the root to some intermediate node of a tree. By
complete path we mean a path from the root to a leaf.
Parent (or father) of node x is the node immediately before x.

Son of node x is a node immediately after x.

Ancestors of node x are all the nodes along the partial path from the root to x. The set of objects consisting of x and its ancestors is called the selected set, S_x. The partial path from the root to x is unique and represents a permutation π^{S_x} of the set S_x.

Note that we distinguish between the node x and the object selected at x, since that object can be selected at a number of other nodes of the PT.
Descendants of x are the nodes that can be reached from x on the path from x to any leaf. The set of objects selected as the descendants of x forms the complement set S'_x of the set S_x, i.e.

S = S_x ∪ S'_x

There are n_x = |S'_x| objects in the descendants of any node x.

Subtree PT_x of x is the smaller tree rooted at x whose subsequent nodes contain elements of S'_x.

Leaf is a node that has no descendants, i.e. a node at stage n.

Depth or stage d_x of node x is the number of ancestors of x. It is also the position of an object in the permutation, and d_x + n_x = n = |S|.
Branching is the selection of arcs out of any node (i.e. a searching motion away from the root). At any node x there are branches to each member of S'_x.
Backtracking is a searching motion back towards the root.
Since in searching down the tree on any path we select distinct objects, the maximum depth of the tree is given by |S|. Any path from the root to a leaf represents a permutation π^S.
The size of the tree, defined as the number of leaves, or, equivalently, the number of corresponding permutations, is n! (where n = |S|) since, from any node at level d we can branch to the n - d remaining objects; so, starting from the root, we have n choices, times n - 1 choices at every node at d = 1, times n - 2 choices at every node at d = 2, and so on, until we run out of objects at d = n. There are n! leaves, paths and permutations in the PT.
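The n! count can be checked by direct enumeration. The sketch below (Python; the function name is ours, not from the thesis) performs the branching just described, from each node to every member of the complement set, and counts the complete paths.

```python
from math import factorial

def count_leaves(objects, selected=()):
    """Count complete paths (permutations) in the permutation tree
    by branching to every object not yet selected."""
    remaining = [o for o in objects if o not in selected]
    if not remaining:          # reached a leaf at depth n
        return 1
    # one branch per member of the complement set S'_x
    return sum(count_leaves(objects, selected + (o,)) for o in remaining)

print(count_leaves((1, 2, 3)))                       # 6 leaves = 3!
print(count_leaves((1, 2, 3, 4)) == factorial(4))    # True
```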
3.2 The Combination Graph
As paths in the PT reach certain nodes, they will have selected the same set of objects S_x, although not in the same order of selection. S_x is called a "combination" of the objects of S. If we let a node or state represent a unique combination of objects, the nodes of the PT will be "clustered" into one node to form a "Combination Graph" (CG). This greatly reduces the number of nodes
Figure 3.2: The Combination Graph for the set {1, 2, 3}

from the PT since at any stage d the number of combinations is C(n, d).
The combination graph for the set S = {1,2, 3} is shown in figure 3.2 and may
be compared to the PT for the same set in figure 3.1. Any path from the root
to a node x in CG represents a permutation which constructs the combination
represented by node x. The partial paths marked with an "*" in figure 3.1 can
be found as paths from the root to node {1, 2} marked with an "*" in figure 3.2.
The node set of the CG is also known as the powerset of S, i.e. the set of subsets of S, and its cardinality is given by the stage-wise summation

Σ_{d=0}^{n} C(n, d) = 2^n
In section 3.5 we will generalize this graph to a State Stage graph (SS-graph)
where a node represents a more general definition of state.
3.3 The Cost Of A Permutation
The purpose of this section is to underline aspects of the form of the cost function for the combinatorial search. Specific cost functions that pertain to runway scheduling are treated in detail in the previous chapter. Recalling that π^X is a permutation of the objects of a set X, the cost function c(π^X) maps the set of permutations onto the real numbers and thus induces a preference order on the set of permutations.
Let us also define the arc cost c_ij as the cost of selecting two objects i and j in succession. The arc cost will typically represent the least time separation between successive aircraft, the distance between two cities, etc.
Having defined the arc costs we can define a variety of cost functions of a permutation, the simplest of which is the sum of the arc costs that appear in it. This type of cost function corresponds to the total operating time or total length of a traveling salesman's tour and can be expressed as

L(π^X) = Σ_{i=1}^{n} c_{x_{i-1} x_i}

where x_0 represents the root of the tree.
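This additive cost can be evaluated incrementally along a path. A minimal sketch (Python; the arc-cost table is illustrative, not data from the thesis):

```python
def path_cost(perm, arc_cost, root):
    """Sum of arc costs c_ij along a permutation, starting at the root."""
    total, prev = 0.0, root
    for obj in perm:
        total += arc_cost[(prev, obj)]
        prev = obj
    return total

# Illustrative arc costs, e.g. least time separations between operations.
c = {(0, 1): 90, (0, 2): 120, (1, 2): 60, (2, 1): 100,
     (1, 3): 75, (2, 3): 75, (3, 1): 90, (3, 2): 60}
print(path_cost((1, 2, 3), c, root=0))   # 90 + 60 + 75 = 225
```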
There are however other classes of cost functions that can be used. These classes have to obey the following separability requirement

c(π^{X1} || π^{X2}) = c(π^{X1}) ∘ c(π^{X2}; π^{X1})

where

X1 and X2 are disjoint subsets of S,

"∘" is an arithmetic operation like addition or multiplication, and

"||" stands for concatenation.
This separability requirement is a consequence of the sequential evaluation of the permutations in a tree search, where we do not want the cost of a partial permutation - a path up to a given node in the PT - to depend on the arc costs of its descendants. The second cost function on the RHS of the previous equation will however depend on π^{X1} parametrically, for the following two reasons:

1. the last item of π^{X1} is the root of the remaining subtree, thus affecting the value of c(π^{X2}), and

2. some cost functions may have a more involved dependence on π^{X1}.
A relevant example of such a cost function is the total weighted delay (TWD). In TWD each aircraft makes a delay contribution equal to the interval from the time the aircraft becomes available to operate until its time of operation, weighted by some number¹ reflecting its relative importance.

The TWD is a valid cost function for our purposes and adds to the merits of an enumeration method. In chapter 2 we showed how to evaluate c(π^{X1}) and how c(π^{X2}; π^{X1}) is parametrically dependent on π^{X1}.
More precisely, the quantity

c(π^{X1}) = Σ_i w_i D_i

according to the notation of section 2.2.7, and the quantity c(π^{X2}) is dependent on L(π^{X1}), calculated recursively by the time evaluation rule.
3.4 Combinatorial Search
Having defined the PT and the cost of a permutation we can now focus on ways of searching the PT. The first and straightforward systematic way to search the PT is that of a Depth First Search (DFS), in which we start from the root and pursue some path, updating our tentative cost function c(π^{S_x}) on every branching until we reach a leaf x where S_x = S. There we create a record, called the current best, saving this permutation and its cost. We subsequently backtrack from x and continue the search of the tree in some manner, updating the current best record as better permutations are found. Thus we
¹The weight may reflect the number of passengers, fuel consumption, VIP priorities, etc.
will, eventually, exhaust all possible permutations and the current best record
will contain the optimal permutation and cost.
One could use other well known ways of traversing the PT, such as best first search (BFS), where we keep a list of active nodes in the PT and branch out of the most promising one. Alternatively, we could use BFS, breadth first search or DFS as branching strategies for the search. The DFS and BFS types of search can be shortened by using bounding strategies, which stop the search at node x if the cost of the partial path leading up to x, plus a lower bound on the cost of the best path in the remaining subtree rooted at x, exceeds the current best value.
Combinatorial search, in contrast to Integer Linear Programming methods, benefits from the existence of additional constraints on the problem, such as the MPS constraints, the time window constraints, or additional constraints on the position of certain objects. In runway scheduling, for instance, if, when arriving at node x, the total time of the permutation so far is less than the earliest possible time for the selected aircraft, or if the total time at x is greater than the latest possible operating time of any of the remaining aircraft, then we can abandon the search in the subtree of x and backtrack. So, for a modest increase in bounding effort we may save on computation by exploring far fewer permutations than otherwise.
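The DFS with a current-best record and bounding can be sketched as follows (Python; the names and the zero lower bound on the remaining subtree are our simplifications, not the thesis implementation). The MPS constraint is included as an optional pruning rule.

```python
import math

def dfs_best(n, cost, mps=None):
    """Depth-first search of the PT keeping a current-best record.
    cost[i][j] is the arc cost c_ij; row n holds the root-to-object
    costs.  The bounding rule here is the simplest possible one:
    abandon a subtree as soon as the cost so far reaches the current
    best (a zero lower bound on the remaining subtree).  If mps is
    given, branches violating |object - position| <= MPS are pruned."""
    best = {"perm": None, "cost": math.inf}

    def dfs(prev, selected, cost_so_far):
        if cost_so_far >= best["cost"]:        # bounding: backtrack early
            return
        depth = len(selected)
        if depth == n:                         # leaf: update current best
            best["perm"], best["cost"] = tuple(selected), cost_so_far
            return
        for obj in range(n):                   # branch to each unselected object
            if obj in selected:
                continue
            if mps is not None and abs(obj - depth) > mps:
                continue                       # MPS-infeasible branch
            selected.append(obj)
            dfs(obj, selected, cost_so_far + cost[prev][obj])
            selected.pop()

    dfs(n, [], 0.0)                            # start at the root (index n)
    return best["perm"], best["cost"]
```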
Finally we note that the performance of the combinatorial search on the PT has as its worst possible bound n!, which is the number of leaves of the PT and hence the size of complete enumeration.
3.5 The State-Stage (SS) Graph Reduction of the Permutation Tree
As shown by the Combination Graph (CG), the PT can be significantly reduced if the state of the search space can be defined simply as the combination of objects selected at stage d. However, if the cost structure is given by the arc costs c_ij, then the cost of selecting the next object depends on the last object selected. The state of the search space must then be described by the pair (l, S_x), where l is the last object selected and S_x is the set or combination of objects selected at node x in the PT.
This is still effective in reducing the size of the search space. Consider figures 3.3 and 3.4. Figure 3.3 shows the first three levels of the PT for the set {1, 2, 3, 4}. At the third stage there are 24 nodes, which can be clustered into four combinations of object sets. The 6 nodes marked with an "*" all have the same object set {1, 2, 4}: two of these belong to state (1, {1, 2, 4}), two belong to state (2, {1, 2, 4}) and two belong to state (4, {1, 2, 4}). Figure 3.4 shows the corresponding SS-graph, where there are only 12 states in the third stage. The number of states at any stage d is given by d·C(n, d), where d is the number of objects selected and C(n, d) is the number of combinations of selected objects at stage d.
Furthermore, the nodes at any stage d can be grouped into a "superstate" where the combination of selected objects is common to all nodes in the group. There are 6 superstates at stage 2 and only 4 at stage 3 in figure 3.4. The branching from a superstate is common to all nodes in the group since S_x is common. This is shown in figure 3.4 for the branching from stage 2 to stage 3. The cost, c_ij, however, depends on which state in the superstate the branching occurs from. Given that we know the cheapest cost to reach any state of the SS-graph at stage d and the cost of branching to a state in the next stage d + 1, a comparison among all the states of a superstate (and only those states) will produce the cheapest cost of a state at the next stage. This will be explained more fully in the next chapter on the parallel processor algorithm.
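The superstate comparison just described is a dynamic-programming recursion over (last object, selected set) states. A minimal sketch (Python; the function name is ours, and the arc costs are supplied as an illustrative dictionary):

```python
def ss_graph_dp(n, cost, root):
    """Cheapest cost to reach each SS-graph state (last object l, set S_x),
    computed stage by stage; within each superstate (common S_x) the
    comparison over its states yields the cheapest entry to stage d + 1."""
    # stage 1: one state per single-object combination
    value = {(l, frozenset([l])): cost[(root, l)] for l in range(n)}
    for d in range(1, n):
        stage = [(s, v) for s, v in value.items() if len(s[1]) == d]
        for (l, S), v in stage:
            for j in range(n):
                if j in S:
                    continue
                nxt = (j, S | {j})             # state at stage d + 1
                c = v + cost[(l, j)]
                if c < value.get(nxt, float("inf")):
                    value[nxt] = c             # cheapest over the superstate
    full = frozenset(range(n))
    return min(v for (l, S), v in value.items() if S == full)
```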
Figure 3.3: The first three levels of a PT of the set {1, 2, 3, 4}. The nodes marked with "*" correspond to permutations of the same object set S_x = {1, 2, 4}.
We can compute the number of states in the SS-graph by the following summation over all stages d:

Σ_{d=0}^{n} d·C(n, d)

This summation² can be shown to be n·2^{n-1}.

The superstates correspond, of course, to the states of the combination graph and so there are 2^n of them.
This state description may be inadequate in cases where the triangle inequality is not satisfied by the c_ij values. In runway scheduling this will be the case whenever takeoffs are inserted between landings. This situation will be confronted in chapter 5, where we will see that the parallel processor algorithm can deal with it optimally.
3.6 MPS-Tree, MPS-Graph And MPS-Combination-Graph
The MPS constraints cause a significant reduction in the size of the PT, creating another tree structure which we shall call the MPS-Tree. Clearly the MPS-tree is a subset of the PT, and figure 3.5 shows the MPS-Tree, for MPS = 1, contained in the PT of figure 3.3. The MPS-Tree can be viewed
²To compute this summation we use the binomial theorem:

(1 + x)^n = Σ_{d=0}^{n} C(n, d)·x^d

which at x = 1 gives

Σ_{d=0}^{n} C(n, d) = 2^n

Then, differentiating with respect to x and setting x = 1:

n·(1 + x)^{n-1} = Σ_{d=0}^{n} d·C(n, d)·x^{d-1}

so that Σ_{d=0}^{n} d·C(n, d) = n·2^{n-1}.
Figure 3.4: The first three stages of the State-Stage Graph corresponding to the PT of the set {1, 2, 3, 4}
Figure 3.5: The MPS-Tree for S = {1, 2, 3, 4} and MPS = 1
as the "MPS-neighborhood" of a base permutation which, in the case of runway scheduling, is the permutation defined by the First Come First Served
discipline.
The MPS-tree folds back onto itself to produce the "MPS-graph", which is naturally a subset of the SS-graph. In turn, the MPS-graph can be further condensed into an MPS-combination-graph (MPS-CG), which is a subset of the combination graph introduced earlier. This condensation can be done by grouping the states (l, S_x) in the MPS-graph into superstates S_x in the MPS-CG, as we will see in chapter 4. Notice, however, that searching the MPS-graph is equivalent to a limited search of the SS-graph, with MPS the parameter controlling the limits. By letting MPS = n we search the full SS-graph.
In this section we will seek to characterize and count the superstates S_x. This characterization will reveal some stage invariant features for S_x, in terms of its complement set S'_x (s.t. S'_x ∪ S_x = S), which will allow us to do the counting on every stage. Specifically, we will discover a "generalized type" of available object set composition, S'_x, that will initially be denoted by a capital letter (B, C, D, E, G, ...). In the last section we will replace the capital letter by a unique vector, v, which both labels the type and allows us to recover the actual composition of the available object set of a given type, given the stage.
The generalized types will also allow us to calculate the size of the MPS-tree by looking at how the available object set S'_x evolves in an MPS feasible way in the nodes of the MPS-tree. As can be seen in figure 3.5, the MPS-Tree has many fewer nodes than the PT, depending of course on the value of MPS. However, we will presently show that its size (i.e. the number of MPS feasible permutations) still grows exponentially with the number of objects. The fact that the size of the MPS-Tree is also the performance bound of elementary search procedures motivates the use of search procedures on the SS-graph of the MPS-tree, whose bounds turn out to be linear in the number of objects and
Figure 3.6: The G^1_n graph for MPS = 1

Figure 3.7: The G^2_n graph for MPS = 2
which turn out to have a stage invariant structure.
3.6.1 The Bipartite Graph Representation Of The Subtrees In The MPS-Tree
In order to both calculate the size of the MPS-Tree and visualize its stage invariant features, it is useful to introduce a bipartite graph (B-graph) representation (figures 3.6 and 3.7) for every subtree in the MPS-Tree. The B-graph illustrates all the possible object-position assignments in the subtree. Typically, the subtree rooted at node x of the MPS-Tree - at depth d, with n_x available objects - has its own B-graph:

X^{MPS}_{n_x} = (S'_x, P_x, E_x)

where

X is a member of a set of graph types tentatively denoted by {B, C, D, E, F, G, ...},

S'_x is the set of remaining objects and is the complement of S_x, the set of objects selected so far,

P_x is the set of remaining positions, and

E_x is the set of admissible edges corresponding to the MPS feasible assignments of objects to positions. So there is an edge connecting an object i and a position j if the absolute difference between the object and position numbers (i.e. |i - j|) is less than, or equal to, MPS.
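Under this definition the edge set is determined mechanically from the object and position sets; a small sketch (Python, hypothetical names):

```python
def b_graph_edges(objects, positions, mps):
    """Edge set of a B-graph: object i may be assigned position j
    iff |i - j| <= MPS (the MPS feasibility condition)."""
    return {(i, j) for i in objects for j in positions if abs(i - j) <= mps}

# Root B-graph for n = 5, MPS = 1: every object may keep its own
# position or move to an adjacent one.
edges = b_graph_edges(range(1, 6), range(1, 6), mps=1)
print(sorted(edges))
```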
At first note that the B-graph is a functional representation of the superstate S_x because it allows us to find the optimal permutation in the remaining available object set S'_x. This is possible because, as we will soon see, a permutation π^{S'_x} in the subtree of node x corresponds to a bipartite matching in the respective B-graph. So the expressions "B-graph type", "superstate type" and "generalized composition of an available object set" are equivalent.
Figures 3.6 and 3.7 show B-graphs for the root nodes of the MPS-Trees for MPS = 1 and MPS = 2. The convention is that the top row contains the positions and the bottom row the objects. This "G" type of graph is given the symbol G^{MPS}_n and is the normal or standard type of B-graph. Figures 3.8 and 3.9 show other types of graphs for other nodes in the MPS-Tree that will be presently discussed.
Figures 3.8 and 3.9 show a portion of the MPS-Tree with its nodes replaced by their corresponding B-graphs. A branching from a node to one of its descendants on the MPS-Tree reflects the assignment of an object to the smallest available position and is shown as an edge, drawn in double lines with an arrow-head. The descendant graph is created by deleting the assigned object and position, and their edges.
Figure 3.8: The first 2 levels in the evolution of G^1_n with MPS = 1

Starting at the root of a tree (or subtree) with a graph G^{MPS}_n, selecting the first object will produce MPS + 1 descendant graphs at stage 1, since there are MPS + 1 candidates for the first position. These descendant graphs will all have the subscript n - 1, but they will be distinguished by their missing object. Continuing to the second generation, each of the graphs will also give more descendants with subscript n - 2, and so on.
In order to explore this process, we shall trace the evolution on the MPS-Tree for MPS = 1 (figure 3.8). At stage 0 there are two possible candidates for the first position. The assignment of object 1 to position 1 then results in the subgraph G^1_{n-1}, with n - 1 remaining positions, resembling its parent except that its first position and object are removed. However, the assignment of object 2 to position 1 produces a B^1 type graph, B^1_{n-1}, which differs from its parent graph. It resembles a G^1_{n-2} B-graph with a tail attached to it, where object 1's edge competes for position 2. However, since object 1 has no other choice but position 2, due to the MPS = 1 constraint, there is only one possible
assignment, namely of object 1 to position 2, and therefore B^1_{n-1} is degenerate, having only one possible descendant, G^1_{n-2}.

Figure 3.9: The initial evolution of G^2_n with MPS = 2
In general, a B^1 type graph necessarily evolves into a G^1 type, whereas a G^1 type graph evolves into both G^1 and B^1 graph types. Repeated application of this logic produces the remainder of the tree in figure 3.8.
Similarly, we can trace the evolution for MPS = 2. The situation is now slightly more complex. Starting from G^2_n at the root, the immediate descendant graphs are of the following three types, which differ by the choice of object for the first available position:

* G^2, the parent type, by choosing object 1;

* B^2, by choosing object 2;

* C^2, by choosing object 3.

These descendant graphs are indexed by n - 1, having one less object than their parent.
Now, since the first descendant is of the parent type G^2, it will evolve like its parent. Thus we only pursue the evolution of graphs B^2_{n-1} and C^2_{n-1}. Starting from B^2_{n-1}, the assignment of object 1 to position 2 results again in a G^2 type graph. The assignment of object 3 to position 2 produces a graph of type D^2 (not shown in the figure) with a trivial tail, where objects 1 and 4 compete for position 3. As before, since object 1 has no choice but position 2, this reduces to G^2. In similar fashion, assigning object 4 to position 2 will produce a degenerate graph of type E which can develop only into a B^2_{n-3} graph.

Finally there is yet another type of degenerate graph, F^2_{n-2} (also omitted from the figure), which occurs in the evolution of the C^2_{n-1} graph. This graph results from assigning object 4 to position 2. As a consequence object 1 now has only position 3 available to it, and so F^2_{n-2} develops into a D^2_{n-3}, which in turn develops into a G^2_{n-4}, since object 2 has only position 4 available to it, as shown in figure 3.9.
Clearly, as we saw in the foregoing examples, the composition of S'_x is determined by the history of the selection, i.e. the path that leads from the root to node x and establishes the structure of the graph at a given node. So, at a given depth we need to further differentiate between these types of B-graphs according to the composition of their object sets, and, for the time being, we use the capital letters B, C, D, E, F, G to distinguish them. Figures 3.8 and 3.9 have the graphs labeled by their corresponding types.
3.6.2 Computing The Size Of The MPS-Tree: The Number Of Matchings In A B-Graph
We begin with the observation that there is a one to one correspondence between matchings³ in G_{n_x} and MPS feasible paths in the subtree rooted at x. So the number, g_{n_x}, of matchings in G_{n_x} equals the number of such paths, which, in turn, is the size of the subtree PT_x. So it suffices to compute g_{n_x}. We also drop the superscript from the graph types if there is no ambiguity.
Note that the number of matchings in G_{n_x} is equal to the sum of the numbers of matchings in its first generation descendants. This is true because it is equivalent to claiming that the number of leaves in a tree is equal to the sum of the leaves of its subtrees at depth 1. The number of matchings in G_n can thus be calculated by recursively applying this argument to each of its descendants. This looks like a formidable task if one has to explore the entire feasible space, which is anticipated to be an exponential function of n. However, as we have just seen, the graph types belong to a limited set and, for small MPS, tractable recurrence relations can be established between them, which permit such a calculation. This will be illustrated for MPS = 1 and MPS = 2.
³A matching on a bipartite graph is defined as a one to one mapping of the object set onto the position set.
Number of matchings for MPS = 1

Now we are ready to illustrate how to use this information in order to compute the number of matchings g_n for MPS = 1.

From figure 3.8 we see that the graph G^1_n has as descendants the graphs G^1_{n-1} and B^1_{n-1}. If we also decide to use the symbol of a graph in lower case to represent its number of matchings, then we can express the number of matchings g_n as:

g_n = g_{n-1} + b_{n-1}

Furthermore, figure 3.8 shows that the graph B_{n-1} has only one descendant, i.e. the graph G_{n-2}, since object 1 has only one MPS feasible position, position 2, available to it. Hence we can conclude that:

b_{n-1} = g_{n-2}

and substituting in the previous equation we get

g_n = g_{n-1} + g_{n-2}    (3.1)

We identify 3.1 as the equation yielding the Fibonacci numbers, whose solution is:

g_n = nearest-integer(c · 1.618^n)

where c is some constant.

The number of matchings g_n is an exponential function of n. Thus the size of the MPS-Tree is also an exponential function of n.
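The Fibonacci growth can be cross-checked by brute force, counting the permutations in which every object is displaced by at most MPS positions (a sketch in Python; for MPS = 1 these counts are exactly the g_n above):

```python
from itertools import permutations

def g_bruteforce(n, mps):
    """Number of MPS feasible permutations of n objects, i.e. the number
    of matchings in the root B-graph: |pi(i) - i| <= MPS at every i."""
    return sum(
        all(abs(obj - pos) <= mps for pos, obj in enumerate(perm))
        for perm in permutations(range(n))
    )

# g_n satisfies the Fibonacci recurrence g_n = g_{n-1} + g_{n-2}
g = [g_bruteforce(n, mps=1) for n in range(1, 9)]
print(g)   # [1, 2, 3, 5, 8, 13, 21, 34]
```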
Number of matchings for MPS = 2

As can be seen in figure 3.9, a type G^2 can be resolved in terms of only three types of graphs: G^2, B^2 and C^2. From their first level expansion we can relate them to each other, producing the following three recursive relations for the number of matchings in the respective graphs:

g_n = g_{n-1} + b_{n-1} + c_{n-1}
b_n = g_{n-1} + g_{n-2} + b_{n-2}    (3.2)
c_n = b_{n-1} + g_{n-2} + g_{n-3}

By eliminating the third of these equations we are left with a system of two coupled recursive equations in the two unknowns g_n and b_n:

g_n = g_{n-1} + b_{n-1} + b_{n-2} + g_{n-3} + g_{n-4}
b_n = g_{n-1} + g_{n-2} + b_{n-2}    (3.3)

These equations are hard to solve. We have used, however, a trick, based on our intuitive anticipation of the answer, to show that for large n the number of matchings when MPS = 2 converges to:

g_n ≈ c · 2.333554^n    (3.4)

for some constant c.
Proof: We postulate that the solution of equation 3.3 is of the form

g_n = t^n,    b_n = x^n

By substituting in the system 3.3 we get two equations for t and x:

t^n = t^{n-1} + x^{n-1} + x^{n-2} + t^{n-3} + t^{n-4}
x^n = t^{n-1} + t^{n-2} + x^{n-2}    (3.5)
There is a nonlinearity involved here that makes it difficult to eliminate x^n. This difficulty can be resolved, and we can obtain a closed form solution, by using our intuition in assuming that the ratio

ε = lim_{n→∞} (g_n / b_n)    (3.6)

is a constant. Indeed, equation 3.6 was verified by recursively evaluating g_n and b_n starting from n = 1 for up to n = 30.
Then from the second equation of the system 3.5, substituting x^n = t^n/ε and x^{n-2} = t^{n-2}/ε (justified since for any constant ε we have lim_{n→∞} ε^{1/n} = 1, so that x and t coincide in the limit), and dividing through by t^{n-2}/ε, we get

t^2 = εt + ε + 1

Rearranging, we have

t^2 - 1 = (t + 1)ε

The LHS factors as (t + 1)(t - 1), and since we are interested in t > 1 we can divide by t + 1 to get:

ε = t - 1    (3.7)
Now, in order to improve the accuracy of our approximation, instead of substituting t^{n-1}/ε and t^{n-2}/ε for the terms x^{n-1} and x^{n-2} in the first of equations 3.5, we substitute for them using the second of these equations, to get:

t^n = t^{n-1} + t^{n-2} + 3t^{n-3} + 2t^{n-4} + x^{n-3} + x^{n-4}    (3.8)

Then, substituting x^{n-3} = t^{n-3}/ε and x^{n-4} = t^{n-4}/ε and dividing through by t^{n-4}, we get:

t^4 - t^3 - t^2 - (3 + 1/ε)t - (2 + 1/ε) = 0    (3.9)
So, starting with a guess value for t, we can compute ε through equation 3.7, then, substituting in equation 3.9, calculate a new value for t, and repeat this iteration to convergence. The iteration converged to t = 2.33355422, and this value was again verified by recursively evaluating equations 3.2. QED.
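The iteration described in the proof can be sketched directly (Python); equation 3.7 supplies ε from the current t, and equation 3.9 is rewritten as a fixed-point map for t:

```python
def growth_constant(t=2.0, iterations=200):
    """Fixed-point iteration of the proof: eps = t - 1 (eq. 3.7), then
    eq. 3.9 rearranged as t = (t^3 + t^2 + (3 + 1/eps)*t + (2 + 1/eps))^(1/4)."""
    for _ in range(iterations):
        eps = t - 1.0
        t = (t**3 + t**2 + (3.0 + 1.0 / eps) * t + (2.0 + 1.0 / eps)) ** 0.25
    return t

print(growth_constant())   # converges to about 2.3336
```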
Thus we have shown that the size of the MPS = 2 tree is of order 2.33^n, whereas for MPS = 1 it was 1.618^n. For higher MPS we expect that the size will also be exponential, with a bigger base. This indicates that a complete enumerative search procedure will not be attractive as n increases, i.e. that the imposition of the MPS restrictions does not help us much in reducing combinatorial complexity.
3.6.3 The Number, T(MPS), Of Distinct Bipartite Graphs
In this section we derive the number, T(MPS), of graph types as a function of MPS. Since B-graphs uniquely correspond to superstates S_x, it follows that T(MPS) is also the number of superstates in stage d of the combination graph when MPS ≤ d ≤ n - MPS. Outside these limits only a subset of these B-graph or superstate types will be available. The number of states in the MPS-graph, as a function of MPS, can then be inferred by considering the contents of the superstates.
From figures 3.8 and 3.9 it can be inferred that in the initial MPS stages
the number of graph types increases with stage, reaching a steady state which
is maintained until stage n - MPS. Then the number of graph types declines.
We seek to compute the number of graph types that exist in any intermediate
stage.
First, note that for all graphs, irrespective of type, the removal of the first MPS available objects and positions always results in a G type graph, since then the remaining available object and position sets are identical. For instance, in graph C^2_{n-1} (figure 3.9), if the first two available objects (1 and 2) and available positions (2 and 3) are removed, a G^2 type graph is obtained, namely G^2_{n-3}. Therefore, graphs of the same index may differ only in the first MPS objects. Alternatively, the composition of the object set possesses a "signature", i.e. a unique coding which determines the graph type, that is drawn from a limited set. So the question is:
"how many such signatures, or graph types, are there?"
Let us denote the number of signatures, or types, by T(MPS). In order to answer this question, we attempt to construct all the possible graphs of the same index, starting from a G type. For convenience, we drop the index n, and define S_G to be the available object set of a G type graph. The set S'_G
MPS   T(MPS)   T(MPS)/T(MPS-1)
1     2        -
2     6        3
3     20       3.333
4     70       3.5
5     252      3.6
6     924      3.66
7     3432     3.71
8     12870    3.75
9     48620    3.77
10    184756   3.8

Figure 3.10: Variation of T(MPS)
is the complement of S_G. We can now construct the other graph types by exchanging objects between S_G and S'_G. We proceed by noting the following:

1. For every object v ∈ S'_G brought into S_G, another object w ∈ S_G must be removed.

2. v can be any of the MPS largest indexed objects of S'_G, and w can be any of the MPS smallest indexed objects of S_G.

3. There are C(MPS, i) ways of choosing i objects from S'_G. For each of them there are another C(MPS, i) ways of choosing which i objects in S_G will be replaced.

So there are exactly T(MPS) possible signatures, where:

T(MPS) = Σ_{i=0}^{MPS} C(MPS, i)^2
Another approach to this calculation is to consider the set

M = {p_1 - MPS, ..., p_1 - 2, p_1 - 1, p_1, p_1 + 1, p_1 + 2, ..., p_1 + MPS - 1}

where p_1 = d + 1 is the smallest available position at stage d.

The set M contains all the legitimate candidates for the object set of any graph with index n - d, i.e. the MPS largest objects of S'_G and the MPS smallest objects of S_G. Clearly we have

C(2·MPS, MPS)

ways of constructing distinct available object sets for a given p_1, by selecting MPS members of M for each of these sets, and as many corresponding signatures.
Thus T(MPS) is given by:

T(MPS) = Σ_{i=0}^{MPS} C(MPS, i)^2 = C(2·MPS, MPS)
The table in figure 3.10 gives the number of signatures T(MPS) and the successive term ratios, which seem to suggest that T(MPS) is of order O(4^MPS). Notice that the number of graph types grows exponentially with MPS.
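Both counting arguments can be checked numerically; the sketch below (Python) verifies that the sum of squared binomial coefficients equals the central binomial coefficient (Vandermonde's identity) and reproduces the first entries of the table:

```python
from math import comb

def t_mps(mps):
    """T(MPS) as the sum of squared binomial coefficients."""
    return sum(comb(mps, i) ** 2 for i in range(mps + 1))

# Vandermonde's identity: the two counting arguments agree.
for m in range(1, 11):
    assert t_mps(m) == comb(2 * m, m)

print([t_mps(m) for m in range(1, 6)])   # [2, 6, 20, 70, 252]
```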
The per stage number of states, m_d, can also be computed by noticing that in every intermediate stage a B-graph corresponds to at most MPS + 1 states, since for a state (l, S_X), X ∈ {G, B, ...}, l can usually be any of the MPS + 1 largest indexed objects of S_X. Previously we saw that some B-graphs have a unique descendant. Similarly, some B-graphs may contain only one MPS feasible state, and thus the words "at most" and "usually" are necessary to allow for such degenerate graphs. Thus m_d is simply MPS + 1 times the number of B-graph types. Furthermore, the per state amount of computation is of order MPS + 1 since, in order to calculate the value of a state (the cost of the optimal partial path leading up to it), we have to examine up to MPS + 1 states in the preceding stage.
Figure 3.11: State - Stage space of the SS-graph of the PT
The order of the size of the computation for a solution procedure operating on the MPS-graph may now be given by:

(MPS + 1)^2 · T(MPS) · n    (3.10)

The size of the computation in the SS-graph is thus linear in n, and exponential in MPS.
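For intuition, the bound 3.10 can be tabulated against the size of complete enumeration; a small sketch (Python):

```python
from math import comb, factorial

def mps_graph_work(n, mps):
    """Order-of-magnitude bound 3.10 on the MPS-graph computation:
    (MPS + 1)^2 * T(MPS) * n, with T(MPS) = C(2*MPS, MPS)."""
    return (mps + 1) ** 2 * comb(2 * mps, mps) * n

# Linear in n, versus the factorial size of complete enumeration:
for n in (10, 20, 40):
    print(n, mps_graph_work(n, 2), factorial(n))
```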
One can appreciate the effect of the MPS constraints on the amount of computation by looking at figure 3.11, containing a schematic stage-wise plot of the per stage number of states. The integral points in the area under the outer curve correspond to the states of the SS-graph. The inner curve encloses the states in the MPS-graph - i.e. the MPS feasible region of the state-space. The MPS feasible region for a constant MPS can be seen as only a small part of the complete state-space, since potentially n can go to infinity.
3.7 The Chessboard Representation And The Label Vector
We have seen how the B-graph representation facilitated our sizing of the MPS-Tree. Along the same lines, the chessboard representation of the MPS constraints is yet another conceptual device, which helps to develop a labeling scheme for the set of signatures of these graphs discussed in the previous section. One might argue that the chessboards are not necessary in presenting the parallel processor. They were, however, central to the development of this research by providing a clue for creating the labeling scheme, which, in turn, is the key to the parallel processor.
A chessboard is a tabular way of displaying the possible object - position assignments at any depth, and there is a one to one correspondence between the set of all possible chessboards⁴ and the set of B-graphs of the previous section. A chessboard is simply a tabular arrangement of the B-graph. Its columns correspond to the available objects S'_x and its rows to the available positions P_x, and every edge in the graph is represented by a "•" sign in the appropriate square of the table. Thus every node in the PT and its corresponding B-graph have a corresponding chessboard as an alternative representation.
Figures 3.12, 3.13 and 3.14 give examples of chessboards for MPS = 3 and for a depth of 5. The available objects are listed on the top row in ascending order, and the available positions, which are of course consecutive numbers starting from d + 1 (where d is the depth of the node in the tree), are listed in the first column. We will momentarily ignore the "x" sign and the labels (0,0,0), (2,2,1) and (3,2,1) in the upper left corners.
As was the case with the graphs, the chessboard at node x represents the
available selections by its pattern of "•" signs.

Figure 3.12: Chessboard (0,0,0) at depth d = 5, MPS = 3

Selecting an object is effected by deleting the first row and the column of
the object of our choice, provided there is a "•" in the cell for that row and
column. This deletion corresponds to assigning the next available position to
the object of our choice. In general, a row pattern of "•"s represents the
feasible branchings on the MPS-Tree. We also note that the chessboard of
figure 3.14 has only one dot in the first column. Thus we must necessarily
select the first object, since it has only one position available to it, and,
like its corresponding graph, this chessboard will have only one immediate
descendant chessboard.

¹The name chessboard was inspired by the "Introduction To Combinatorial
Mathematics" by Liu [23], in the discussion of permutations with restricted
positions.
On the chessboard of figure 3.12, corresponding to a G-type graph, we
observe that the dots lie symmetrically around its first diagonal, because its
position and object sets are identical. This is not true, however, for the
chessboard of figure 3.14, where we see the symbol "x" replacing some dots.
The x symbol marks a position that is not available to the object of its
column, but would have been had the chessboard been symmetric. In fact,
patterns of x symbols arise in all but the symmetric chessboards and can be
considered the signatures of the chessboards (or graphs). If we were to redraw
figure 3.8 (the evolution of the G-type graph) with the graphs replaced by
their respective chessboards, one could verify by inspection that this is the
case. The correspondence of signatures and these patterns will now be
established.

Figure 3.13: Chessboard (2,2,1) at depth d = 5, MPS = 3

Figure 3.14: Chessboard (3,2,1) at depth d = 5, MPS = 3
Each of these x patterns can be viewed as a label vector

v = (v_1, v_2, ..., v_MPS)

in N^MPS (where N is the set of nonnegative integers), where the component
v_i equals the number of x symbols in the i-th available column of the
chessboard. These label vectors, or simply vectors, appear in the top left
corner of their respective chessboards and can be viewed as the signatures of
the B-graph types discussed previously.
Furthermore, these vectors have the interesting property that each component
is bounded in size by its preceding component, i.e.

v_1 ≥ v_2 ≥ v_3 ≥ ... ≥ v_MPS
Proof: In order to prove this property we use an intuitive argument. Recall
that every chessboard corresponds to some node x with a set of available
objects, listed in ascending order on the corresponding chessboard. The
number of objects with index greater than that of the i-th available
object, which have already been assigned a position, forms the i-th label
vector component.

For example, in figure 3.14, objects 4, 6 and 8 have been selected before
the first object, object 3. The second object, 5, has only objects 6 and
8 preceding it, and the third object, 7, has only object 8 preceding it.
Objects 4, 6 and 8 are missing from the top row in figure 3.14.

Then recall that the objects are listed in ascending order, and therefore a
missing object with index greater than that of the i-th object contributes
a unit to all v_j, j ≤ i, whereas the reverse is not true. For example, the
missing object 8 contributes a unit to the vector components of objects
7, 5 and 3, whereas missing object 6 contributes a unit only to the vector
components of objects 5 and 3 and not to the vector component of object
7. So v_j ≥ v_i for j ≤ i, and this completes the proof. QED
This label vector set provides a good coding scheme for the set of
chessboards, or set of graph types, because these vectors are easy to generate
and store for any value of MPS. In the next chapter we will show how to do this
efficiently with minimum storage requirements.
     Label      Descendant
     Vector     Label Vectors
0    (0,0)      {(0,0), (1,0), (1,1)}
1    (1,0)      {(0,0), (2,0), (2,1)}
2    (1,1)      {(1,0), (2,0), (2,2)}
3    (2,0)      {(0,0)}
4    (2,1)      {(1,0)}
5    (2,2)      {(2,0)}
Figure 3.15: The 6 label vectors for MPS = 2 with lists of their descendants.
Figure 3.15 shows the 6 label vectors for MPS = 2, each with its list of
descendant vectors.

The following two lemmas show how to identify the objects of a given
chessboard from its stage and label vector, and how to derive the descendant
vectors of each label vector.
LEMMA 1: If for a given node x in the PT we know its label vector v
and its depth d, then we can identify its available object set S'_x.
The number o_i of the i-th available object is given by:

o_i = d + i - v_i ,   i = 1, 2, ..., MPS     (3.11)

The same formula holds for MPS + 1 ≤ i ≤ n - d, assuming v_i = 0 in
this range.
The proof of this equation lies in its interpretation, as follows:

1. Recall that d_x + 1 is also the first available position p_1 of any
chessboard at depth d_x, and note that the quantity d_x + i is the number
of the object that would label the i-th column of a symmetric chessboard
(labeled by (0,0,...,0)), i.e. of a G-type graph. We use S'_G to denote
its available object set. Next we form the ordered set M by appending the
first MPS objects of S'_G to the last MPS objects of its complement S_G:

M = { d_x + 1 - MPS, ..., d_x - 1, d_x | d_x + 1, ..., d_x + i, ..., d_x + MPS }

where the elements before the bar belong to S_G and those after it to S'_G.
For example, with MPS = 3 and d_x = 5, M is:

{ 3, 4, 5 | 6, 7, 8 }

Now a positive i accesses the elements in S'_G, whereas a non-positive
i accesses the elements in S_G.

2. By removing from M an object z that is missing from S'_x and re-inserting
it at the beginning of M, we shift all elements of M to the left of z by one
position to the right. Repeating this step for every such object in the right
half of M, the last positions of M will contain the first elements of S'_x.
For example, with v = (2,2,1) we can see from figure 3.13 that objects
6 and 8 are missing from S'_x. By displacing these elements to the
beginning of M we get:

{ 8, 6, 3 | 4, 5, 7 }

Furthermore, the new object o_i in position i has been shifted by v_i
positions from its initial position in M. In our example, objects 4,
5 and 7 have been shifted to the right by 2, 2 and 1 positions
respectively. We can thus identify o_i as the object d_x + i - v_i.
We can verify this expression using the example of figure 3.14, where
d_x = 5 and v = (3,2,1):

for i = 1 we get o_1 = 5 + 1 - 3 = 3,
for i = 2 we get o_2 = 5 + 2 - 2 = 5, and
for i = 3 we get o_3 = 5 + 3 - 1 = 7.
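Equation 3.11 is easy to exercise in code. The following is our own minimal Python sketch (the function name is ours, not the thesis's) listing the first few available objects of a chessboard given its depth and label vector:

```python
def available_objects(d, v, count):
    """o_i = d + i - v_i for i = 1..MPS, with v_i = 0 beyond MPS
    (lemma 1, eq. 3.11).  Returns the first `count` available objects
    of a chessboard at depth d with label vector v."""
    mps = len(v)
    return [d + i - (v[i - 1] if i <= mps else 0) for i in range(1, count + 1)]
```

With d = 5 and v = (3,2,1) this reproduces the figure 3.14 check above: the first three available objects are 3, 5 and 7.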
LEMMA 2: Given a label vector, we can find the label vectors of its
descendant B-graphs (or chessboards).

This is important because we intend, later on, to also use these vectors
as a key to code the descendant B-graphs (or chessboards) for computer
storage.

Having described the process of branching on chessboards as selecting an
object and eliminating its column and the first row, we can find the vectors
of the descendant chessboards by simply looking at the x patterns of the
descendants (this is how these patterns were originally discovered).
We can, however, observe that if the object o_i in the i-th column is
selected, then:

• all the smaller numbered objects, to the left of o_i, now have an
additional, higher numbered, object missing, and thus their respective
vector components should be incremented by 1;

• the objects to the right of o_i maintain their status, and thus their
respective vector components remain unaltered;

• the object o_i is removed and so is the i-th vector component. Thus
the succeeding components have their indices decreased by one, and
a zero component is appended to the end of the descendant vector, since
the (MPS+1)-st available object will become the MPS-th available object
for subsequent selections.
Based on these observations we can state the following formula for the
i-th descendant vector v' of father vector v. Let v̄ = (v, 0), i.e. v with a
zero component appended; then:

v'_j = v̄_j + 1      if 1 ≤ j < i
v'_j = v̄_{j+1}      if i ≤ j ≤ MPS

where i = 1, 2, ..., MPS + 1. For example, the descendants of the vector
(0,0,0) are the vectors:
(0,0,0) for i = 1,
(1,0,0) for i = 2,
(1,1,0) for i = 3 and
(1,1,1) for i = 4.
The vector (3,2,1) has only one descendant, the vector (2,1,0), because
i = 1 must be selected: since MPS = 3, we cannot select an i > 1,
because v'_1 would then become 4 > MPS. Thus, if v_1 = MPS, the
vector has only a single descendant vector.
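The branching rule of lemma 2 translates directly into code. This is our own Python rendering of the formula (the early exit encodes the single-descendant case v_1 = MPS):

```python
def descendants(v):
    """All feasible descendant label vectors of v (lemma 2).
    v is a nonincreasing vector of length MPS with entries in 0..MPS."""
    mps = len(v)
    vbar = list(v) + [0]            # v with a zero component appended
    out = []
    for i in range(1, mps + 2):     # i-th available object is selected
        if i > 1 and v[0] + 1 > mps:
            break                   # selecting i > 1 would make v'_1 > MPS
        out.append([vbar[j] + 1 for j in range(i - 1)]      # j < i: +1
                   + [vbar[j] for j in range(i, mps + 1)])  # j >= i: shift
    return out
```

Running it on (0,0,0) and (3,2,1) reproduces the two examples in the text, and on the MPS = 2 vectors it reproduces the descendant lists of figure 3.15.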
Chapter 4
The Parallel Processor
4.1
Introduction
In this chapter we use the results developed in chapter 3 in order to construct a computational device which we have called the parallel processor. The
parallel processor is a network of abstractly defined processors that can operate
simultaneously, i.e. in parallel, in order to produce the optimal permutation in
the MPS-neighborhood of a given base permutation. Recall that the base permutation, in runway scheduling, is defined by the First Come First Served (FCFS)
order of arrivals, and that the MPS neighborhood is the set of all permutations
that are MPS feasible re-arrangements of the FCFS. The MPS neighborhood
is represented by the MPS-tree and its contraction the MPS-graph.
More specifically, in this chapter we describe the structure, function,
implementation and fine tuning of the parallel processor. We discover that the
label vectors exhibit a tree structure, which we use to reduce the size of
their required storage. We also use this label tree structure to prove that
half the label vectors have only one immediate descendant and half have only
one immediate ancestor. This result is important in determining the performance aspects of
the parallel processor which are also influenced by the additional constraints of
runway scheduling as well as the existence of a limited number of object classes
(determined by the aircraft's approach speed and weight).
We introduce some revised notation:

v is the label vector uniquely corresponding to a B-graph type. From now
on we will use the expression "v-type graph" instead of G-type or B-type
etc. There will also be v-type superstates and processors.

S'_v(d) is the available object set of a v-type graph, with cardinality
|S'_v(d)| = n - d. The composition of S'_v(d) can be determined using
lemma 1 of chapter 3.

S_v(d) is the complement of S'_v(d), i.e. S'_v(d) ∪ S_v(d) = S, and contains
the objects selected up to stage d. Further, S_v(d) is a superstate, i.e. a
set of states in the state space, and a node in the combination-graph. It is
also equivalent to an instance of a v-type B-graph.

(l, S_v(d)), where l ∈ S_v(d), is a state corresponding to a collection of
nodes of the PT and a member of the superstate S_v(d). The object l is the
last selected at stage d - 1. The pair (l, S_v(d)) forms an equivalent state
definition.

Note that v can be viewed as a generalized object set composition and
S_v(d) as a specific instance of it.

π_X/l, where X ⊆ S, is a permutation of the set X that terminates with
object l ∈ X.

ẑ denotes that the quantity z is optimal.

The value of the state (l, X) is defined as the cost c(π̂_X/l).

L(π_X/l) is the total length, i.e. the sum of the c_ij's, up to state (l, X).
Note that L(π_X/l) = c(π_X/l) if the optimization's objective is total length.
Otherwise L(π_X/l) may be needed for evaluating c(π_X/l).
We now define a new quantity:

D_i(v) is the i-th descendant vector of vector v, determined by lemma 2 of
chapter 3. D_i(v) has a functional correspondence with the i-th available
object, l_i ∈ S'_v(d), because the object set S_{D_i(v)}(d + 1) is given by:

S_{D_i(v)}(d + 1) = S_v(d) ∪ {l_i}     (4.1)

where, by lemma 1,

l_i = d + i - v_i ,   i = 1, 2, ..., MPS + 1,  v_{MPS+1} = 0     (4.2)
4.2 The MPS-Graph As A Solution Space
So far we have said that finding the MPS-feasible optimal permutation can
be done by finding the optimal complete path in the MPS-graph. We are now
ready to show how to do this efficiently. We will show that the search of the
MPS-graph can be done with the help of the MPS-combination graph which
in turn collapses "horizontally" into a network, the parallel processor, of graph
types representing its cross-section at any stage. By providing each node in this
network with storage, a set of rules and a processing unit we can turn it into a
processor abstraction; hence the name Parallel Processor for the network.
We begin by considering the MPS-graph, for MPS = 2, shown in figure 4.1,
where we can make the following observations:
1. The feasible states (l, S_v(d)) are grouped into superstates, shown as
boxes and labeled by S_v(d). We shall identify these superstates as "v-type"
superstates.
2. Since l is the last selected object of the state (l, X), its immediate
ancestor states are all the states in the superstate X - {l} of the previous
stage. For instance, all the arcs into state (5, {1,2,3,5}) derive from states
in the superstate

S_(0,0)(3) = {1,2,3}

Alternatively, we may state that the immediate ancestor states of the
state (l, S_v(d) ∪ {l}) are the states in the superstate S_v(d).
3. All descendant states from a supernode can be determined using lemmas
1 and 2.

4. States in a supernode S_v(d) with v_1 = MPS have only one descendant, as
we saw previously.
Figure 4.1: The first four stages of the MPS-graph of the set S = {1, 2, 3, 4, ..., n},
MPS = 2.
5. Certain states have no inbound or outbound arcs and are crossed out.
These states are MPS-infeasible and do not belong to the MPS-graph. We
included them, however, in order to show that their superstates contain only
one MPS-feasible state. We shall call this type of superstate a "singleton"
state. Consider the infeasible state (3, {1,2,3,6}) in S_(1,1)(4). This state
is infeasible because, by removing the last selected object 3, the ancestor
object set {1,2,6} contains object 6, which couldn't have been selected at any
stage d < 4 given that MPS = 2. However, the state (6, {1,2,3,6}) is feasible.
We can generalize this last observation by noting that if the superstate
S_v(d) contains the object d + MPS, it is a singleton, because this object
is an infeasible selection at any earlier stage. However, that means that
d + MPS ∉ S'_v(d). On the other hand, d + MPS is the largest object
that could ever be the MPS-th available object in any S'_v(d), because the
expression

l_MPS = d + MPS - v_MPS

is maximized when v_MPS = 0. Thus,

d + MPS ∉ S'_v(d)  ⟺  v_MPS > 0

This result is symmetric to that of the single descendant case that we
encountered in the previous chapter when v_1 = MPS, and can be attributed
to the symmetry of the MPS-combination graph (MPS-CG).
We will elaborate on this point in the next section, which is devoted to
the MPS-CG.
4.3 The MPS-Combination-Graph (MPS-CG)

4.3.1 Construction
Using the first two observations about the MPS-graph from the previous
section, we can condense it into an MPS-combination graph (MPS-CG), whose
nodes correspond to MPS-feasible superstates S_v(d) and have a special
micro-structure. This can be accomplished by splitting each state
(l, S_v(d) ∪ {l}) into two connected nodes, which we denote by ○ and □,
and placing the □ node into the ancestor superstate S_v(d).

The resulting MPS-combination graph (MPS-CG) for MPS = 2 is shown in
figure 4.2. Of course, the infeasible states are not represented in this
graph.
Each superstate in the MPS-CG now contains some of the arcs of the MPS-graph,
which connect ○ nodes at stage d to □ nodes at stage d + 1. In fact,
the superstates contain disjoint and collectively exhaustive subsets of the arc
set of the MPS-graph. The micro-structure of the superstate S_(1,0)(4) is shown
in figure 4.3 with appropriate labels.

Clearly, in order to find the shortest path from the root of the MPS-graph
to any □ node, we only require the shortest paths to the ○ nodes in the
superstate that contains it. The necessary information can then be transmitted
to the descendant ○ node (of the □ node) in the next stage. Evidently each
superstate can do its own optimization independently of the others at the same
stage. We will see how in the next section.
Notice that in the first MPS stages the MPS-CG contains ○ nodes which
have no arcs into them. These nodes correspond to infeasible states that are
not contained in the MPS-graph; such a state (x, S_v(d)) would require
x ≤ 0, which is absurd. Similarly, in the last MPS stages we will
have □ nodes corresponding to states (y, S_v(d)) such that y > n. These states
Figure 4.2: The first four stages of MPS-CG for the set S = {1,2, 3, 4,..., n},
MPS = 2.
Figure 4.3: A detailed look at superstate S_(1,0)(4) at stage d = 4, showing
its ancestor superstates at stage d = 3 and its descendant states at stage
d = 5.
are also infeasible and as a result the MPS-graph and the MPS-CG taper off
at the end.
4.3.2 Global And Tentative MPS-Graphs Reflecting The Dynamics Of The RSP
These infeasible states are a consequence of the fact that the MPS-CG is a
portion of an infinite or "global" graph. In terms of the RSP the global graph
contains all the aircraft arrivals from the beginning to the end of time, or
time horizon for practical purposes. The MPS-CG contains only the currently
known aircraft which have not operated yet. The content of the MPS-CG will
therefore change as time goes on, leaving in its trail a path in the global MPS-CG containing the operations that have taken place. Clearly the origin of the
MPS-CG will not always correspond to a G-type graph since at the time of the
last realized operation, defining the origin of the new MPS-CG, some of the
waiting aircraft have already been shifted backwards. We will return to this
topic when we examine the optimization in detail.
The dynamics of this re-optimization process can be seen schematically in
figure 4.4 as a series of permutation trees embedded in a global PT (GPT). The
GPT contains all past and future arrivals, and at the current time we only know
arrivals up to index m. The path of * symbols represents operations; these
operations have taken place if they are to the left of the root of PT_m.

Notice that infeasible ○ states, as well as other states which, although
they belong to the MPS-graph, become infeasible in the course of the
optimization by virtue of additional constraints, will be excluded from
consideration. Note also that the order of the ○ nodes in each superstate is
irrelevant.
4.3.3 Symmetry - The Complement Label Vector
In this subsection we take the opportunity to mention the symmetry of
Figure 4.4: Schematic representation of the dynamic environment as a global
permutation tree (GPT) containing a series of tentative permutation trees
PT_1, PT_2, ..., PT_m. The symbol * is used to represent an operation on the
runways.
the MPS-CG, which allows us to calculate the number of singleton states and
construct the terminal part of the MPS-CG from its initial part.

To this end we define the complement label vector, v^c, of the label vector v,
which allows us to compute the last MPS elements of S_v(d), such that the j-th
element from the end of S_v(d) is given by:

l_j = d - j + v^c_{j+1} ,   j = 0, 1, 2, ..., MPS - 1

i.e. v^c represents the last MPS objects of S_v(d) in descending order.

Like v, whose components count missing elements of S'_v(d), the component
v^c_{j+1} counts the number of objects in S'_v(d) that have index smaller
than l_j. We expect that

v^c = g(v; MPS)

where g is a one-to-one mapping that maps the set of label vectors onto itself.
There is no straightforward analytic way of describing the function g. The
simplest way of deriving g(v; MPS) is by finding the first MPS members of
S'_v(d) for any sufficiently large d, thus determining the last members of
S_v(d), and then counting missing objects. The following table shows the
function g(v; 2):
v:               (0,0)   (1,0)   (1,1)   (2,0)   (2,1)   (2,2)
v^c = g(v; 2):   (0,0)   (1,0)   (2,0)   (1,1)   (2,1)   (2,2)
We may now complete the description of the MPS-CG by noting its symmetry
around the stage d = n/2. By folding the MPS-CG around its middle
cross-section we would be able to make the two halves match congruently. To
establish this congruence, note that we can produce the second half of the
MPS-CG simply by:

1. Swapping the circle and square at either end of each inter-superstate arc
in the first half of the MPS-CG.

2. Flipping the resulting graph.

3. Replacing the label vectors v by their complements v^c.

4. Replacing the argument d by n - d.
In fact this transformation will map the entire MPS-CG onto itself, but
flipped over. The symmetry of the MPS-CG can be summarized in the relation:

S_{v^c}(d) = S'_v(n - d)

Notice that whereas the label vector v, by defining the composition of
S'_v(d), induces a forward motion in the MPS-CG, its complement vector v^c,
defining the composition of S_v(d), induces a backward motion in the MPS-CG.
In other words, whereas D_i(v) produces the descendants of v, D_i(v^c)
produces the ancestors of v.
A v-type superstate is a singleton, i.e. v_MPS > 0 according to the third
observation on the MPS-graph, if its complement vector v^c is
single-descendant, i.e. if v^c_1 = MPS. So the number of singleton superstate
types equals the number of single-descendant types, and in section 4.7.2 we
show that it equals T(MPS)/2.
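Since g has no straightforward analytic form, the derivation described above (build S'_v(d) via lemma 1, take the complement, count) translates directly into code. This is our own sketch, with our own names; the universe size n is an arbitrary large-enough value:

```python
def complement(v, n=40):
    """Derive v^c = g(v; MPS) by simulation: build the available set
    S'_v(d) at a large-enough depth d (lemma 1), take the selected set
    S_v(d), and count, for each of its MPS largest members, the
    available objects with smaller index."""
    mps = len(v)
    d = max(v) + mps                     # any depth where the board exists
    avail = [d + i - (v[i - 1] if i <= mps else 0)
             for i in range(1, n - d + 1)]
    selected = sorted(set(range(1, n + 1)) - set(avail), reverse=True)
    return [sum(a < s for a in avail) for s in selected[:mps]]
```

For MPS = 2 this reproduces the table above, including the swap of (1,1) and (2,0), and one can check that g is an involution on these vectors.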
4.4 The Parallel Processor Network PP^MPS
From the MPS-CG in figure 4.2 it is evident that the structure of the
cross-section at any stage d ≥ MPS is the same. If we were thus to fold the
MPS-CG so that the superstates of the same type fall on top of each other, we
would obtain a network of superstate types, which we call the Parallel
Processor network (PP^MPS). The PP^MPS is a horizontal contraction of the
MPS-CG and can be viewed as its generalized cross-section. In the standard
notation we can represent the PP^MPS by:

PP^MPS = (P^MPS, E^MPS)
where

P^MPS = {v : v is a superstate type for the given value of MPS}

E^MPS = {e_i = (v, u) : u = D_i(v), v ∈ P^MPS}

is the set of directed arcs. The number of arcs out of supernode v is
MPS + 1 if v_1 < MPS, and one otherwise. The number of arcs into supernode v
is MPS + 1 if v_MPS = 0, and one otherwise.
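The supernode set P^MPS and the arc set E^MPS can be generated mechanically from the descendant rule of lemma 2. The sketch below is our own rendering (the names `label_vectors`, `addressee_list` and `descendants` are ours); "addressee list" follows the thesis's name for the adjacency list:

```python
def label_vectors(mps):
    """All nonincreasing vectors over {0..MPS}: the supernodes P^MPS."""
    vs = [[]]
    for _ in range(mps):
        vs = [v + [x] for v in vs for x in range(mps, -1, -1)
              if not v or x <= v[-1]]
    return [tuple(v) for v in vs]

def descendants(v):
    """Descendant label vectors of v (lemma 2 of chapter 3)."""
    mps = len(v)
    vbar = list(v) + [0]
    out = []
    for i in range(1, mps + 2):
        if i > 1 and v[0] + 1 > mps:
            break                       # single-descendant case v_1 = MPS
        out.append(tuple([vbar[j] + 1 for j in range(i - 1)]
                         + [vbar[j] for j in range(i, mps + 1)]))
    return out

def addressee_list(mps):
    """The arc set E^MPS as an adjacency ("addressee") list."""
    return {v: descendants(v) for v in label_vectors(mps)}
```

For MPS = 2 this yields the 6 supernodes of figure 3.15, with out-degree 3 exactly when v_1 < 2 and in-degree 3 exactly when v_2 = 0, as claimed.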
Previously, we said that each stage of the optimization can be carried out
independently in each superstate of the MPS-CG. Note also that the processing
code is the same for all the states and all the stages. By further creating a
processor node p_v corresponding to each supernode in P^MPS, we transform the
PP^MPS into a feedback network of processors, which can carry out the
optimization in the MPS-CG. We will give an overview of the function of the
PP^MPS before stating the complete algorithm.
The arcs in E^MPS represent stage transitions. An arc's head and tail may
be interpreted as the input and output of the connected processors
respectively. In functional terms an arc represents the address of the
processor to which it points. The processor p_v will also be equipped with
storage, which we will call a "mail box". The mail box is divided into two
sections, namely:

• Current-Mail
• Next-Mail

The Current-Mail contains information for each of the feasible states in
superstate S_v(d) at stage d, i.e. the pointed-to ○ nodes in the MPS-CG. The
necessary information for state (x, X), contained in a data record which we
call a "letter" or "mail-item", consists of the value of x and a pointer to the
letter containing the neighbor (y, X - {x}) of (x, X) in the shortest path from
the origin to (x, X). Typically, processor p_v at stage d will construct a
letter M_i for each of the descendant states (l_i, S_{D_i(v)}(d + 1)) and mail
it to its "addressee" processor u = D_i(v).
The letters M_i are the result of the optimization step executed by p_v. They
contain the objects l_i and point to their best neighbor state in the current
mail. Clearly, the composition of the set X in state (x, X) can be found by
tracing the pointers from its corresponding letter.

Further on, the letter M_i is stored in the next mail of the addressee
processor u. Thus the Next-Mail of any processor u accumulates the letters for
the states in superstate S_u(d + 1) at stage d + 1, which are prepared and
mailed at stage d by the ancestor processors p_v such that u = D_j(v) for
some j.
Figure 4.5 shows the parallel processor network PP^2 for MPS = 2. To help
bring out the connectivity and the forward motion at each stage, the next mail
box of each processor is shown to the right, with a backward arc connecting it
to the respective current mail. This arc signifies the transfer of the contents
of the next mail into the current mail, which concludes the optimization step
at stage d. The next mail is then emptied in preparation for stage d + 1.
The following important remarks are in order:
1. A processor could apply additional feasibility criteria deriving from the
additional constraints of runway scheduling, such as enforcing a FCFS
sequencing for aircraft that belong to the same class, or applying fathoming
strategies based on the current cost. Later in this chapter we will examine
these issues separately. For the moment note, however, that the effect of the
state feasibility tests will be a variable amount of mail output from each
processor, and necessarily variable current and next mail. Thus the mail
content is a dynamic quantity that can vary in principle between zero and
infinity (i.e. the memory resource limits).

2. The order in which the letters arrive in the mail box is not important.
Figure 4.5: Parallel Processor for MPS = 2, showing for each processor its
current and next mail boxes at stages d and d + 1.
3. As the PP^MPS weaves through the MPS-CG, it leaves behind a pointed
structure of letters, which we can identify as the all-shortest-paths tree of
the MPS-graph. The leaves of this tree are contained in the current mailbox.

4. At the end of stage d all the "useful" letters in the current mail are
pointed at by the letters in the next mail. The non-useful letters represent
states which are no one's best neighbor, and thus the storage which they
occupy can be returned to the operating system. This observation is important
for memory resource management.

5. By specifying MPS it is possible to construct PP^MPS, the parallel
processor network used to solve the problem. This is a pre-processing step.
4.5 The Parallel Processor Function And Implementation
We can now state a complete algorithm for finding the optimal permutation
in the MPS-neighborhood of an initial base permutation. We assume that all
the processors have access to two arrays: a one-dimensional array containing
the base permutation, and a two-dimensional array containing the least time
separation matrix. Notice that by l_i we really mean the object in position
l_i of the base permutation. The base permutation will typically contain
pointers to data concerning the individual aircraft, such as type of
operation, earliest and latest possible operation times, number of passengers,
etc.

First we will formally define the letter, the optimization step, and the
initialization.
4.5.1 The Definition Of A Letter

The letter corresponding to (l, X) is formally defined as a data record
consisting of the following four fields:

1. the last selected object l,
2. the value of the state c(π̂_X/l),
3. the length L(π̂_X/l), and
4. a pointer to its preceding state label.

As we said earlier, the necessary contents of a letter are items 1 and 4.
Items 2 and 3 are only required locally at each stage and need not be stored
in the shortest path tree, because they can always be reconstructed after the
optimization takes place.
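As a concrete rendering, the letter can be sketched as a small record type. This is our own Python sketch (the names `Letter` and `trace` are ours); tracing field 4 back to the root recovers the permutation, exactly as the text describes for the set X:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Letter:
    """One mail item: the four fields of section 4.5.1."""
    obj: int                       # 1. last selected object l
    value: float                   # 2. value of the state c(pi_X/l)
    length: float                  # 3. total length L(pi_X/l)
    prev: Optional["Letter"]       # 4. pointer to the preceding state's letter

def trace(letter):
    """Recover the permutation by following the back pointers (field 4)."""
    out = []
    while letter is not None:
        out.append(letter.obj)
        letter = letter.prev
    return out[::-1]
```

Fields 2 and 3 could indeed be dropped from stored letters, as the text notes, since `trace` needs only fields 1 and 4.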
4.5.2 The Optimization Step

Here we present a formal statement of the function of processor p_v.
For each object o_i such that:

o_i ∈ S'_v(d),  |d + 1 - o_i| ≤ MPS

find the best permutation

π̂_{S_v(d) ∪ {o_i}}/o_i = (π̂_{S_v(d)}/l_i) ∥ o_i

(the optimal permutation ending in the minimizing l_i, followed by o_i),
which satisfies:

c(π̂_{S_v(d) ∪ {o_i}}/o_i) = min_{l_i} { c(π̂_{S_v(d)}/l_i) + δc(l_i, o_i) }     (4.3)

where δc signifies the increment in the objective function, and the minimum is
taken over l_i ∈ S_v(d) such that (l_i, S_v(d)) is a feasible state in the
current mail.
4.5.3 Initialization and Termination - Adapting To The Dynamic RSP Environment

In general, we initialize the PP^MPS by installing in the current mail of
the initial processor p_{v^0} a letter, R_0, containing the root object and a
pointer to itself. Based on R_0, processor p_{v^0} constructs the letters for
its addressees, so at the second stage more than one processor has mail. In
turn these processors send mail to even more processors, etc., until at stage
d = MPS all the processors have mail. Of course the mail contains only
feasible states, i.e. only the pointed-to ○ nodes of the MPS-CG (see
figure 4.2).

Similarly, in the last MPS stages the processors do not produce mail for the
next available object l_i if l_i > n. Thus the amount of current mail in each
processor is reduced, and so is the number of processors that receive mail at
all. In the final stage only the processor p_{(0,0,...,0)} has mail.
Exploiting the symmetry of the MPS-CG, discussed in section 4.3.3, we can
visualize this behavior from the front part of the MPS-CG: the number of
outputs from a processor p_v at stage n - d equals the number of pointed-to
○ nodes in the superstate S_{v^c}(d) in the MPS-CG.

At the last stage the optimal permutation can be traced from the cheapest
letter in the mail of processor p_{(0,0,...,0)}.
Now, the type v^0 of the initial processor is not always (0,0,...,0). As we
said earlier, the dynamic nature of the RSP results in frequent schedule
updates, because new aircraft arrivals render a given tentative schedule
suboptimal. In each update, which requires a complete shortest path
computation on the tentative MPS-graph, some of the aircraft to be scheduled
may already have been displaced irrevocably from their original FCFS position.
Thus, the type of the origin of the tentative MPS-CG will be some vector:

v^0 ≠ (0, 0, ..., 0)
In analogy to a difference equation of order MPS, one may interpret the
components of v^0 as initial conditions for the parallel processor. The i-th
component of v^0 is the number of higher indexed aircraft that have already
operated.

In order to solve the problem we have to somehow account for the higher
indexed aircraft that are missing from the available aircraft set

S = S'_{v^0}(0)

This can be overcome by including, in the array containing the currently
available aircraft, additional empty cells corresponding to the missing
higher indexed aircraft, which have already operated. For instance, with
MPS = 4 the initial vector v^0 = (4,2,1,0) will necessitate the following
modified array:

0 | x | x | 3 | x | 5 | x | 7 | 8 | ... | n

where the symbol "x" represents a missing object. This can be explained
by noting that the tentative base permutation is a portion of a global one
containing all the past and future aircraft arrivals. One may also see that a
v^0 = (0,0,...,0) situation would correspond to the beginning of a new busy
period.
By inserting empty cells we pretend that the process started with processor
p_{(0,0,...,0)} but was forced through processor p_{v^0}, which occurs for the
first time at stage d = v^0_1, as can be verified from the MPS-CG.
Consequently, the first stage in the parallel processor is d = v^0_1.
Evidently the effective number of available aircraft is now n + v^0_1, since
we have inserted v^0_1 empty cells. Thus the optimization terminates at stage
d = n + v^0_1.
4.5.4
The Parallel Processor Algorithm.
The parallel processor algorithm has the following steps:
114
1. At stage d = v_1, install in the CURRENT-MAIL of the initial processor p_{v_0} the special letter R_0 containing:

   * the aircraft that operated last,
   * a value of zero, and
   * a pointer to itself.

2. For each stage d, v_1 ≤ d < n + v_1, each processor v does the following:

   * For each feasible object o_i given by 4.2, selects from the current mail the letter R containing the optimal l_i according to the optimization step given by 4.3.
   * Creates a new letter R* containing o_i, the value of the corresponding descendant state (o_i, S_{D_i(v)}(d + 1)), and a pointer to R, and mails R* to the addressee processor p_{D_i(v)}.
   * After all the processors have finished sending mail, they empty their NEXT-MAIL, transferring its contents into their CURRENT-MAIL.
3. Finally, after stage d = n + v_1, stop. The mail of the processor (0, 0, ..., 0) at stage d = n + v_1 contains the set of permutations:

   Π = { π_S/l_i : n − MPS ≤ i ≤ n }

The sought optimal permutation π*_S is the one that satisfies

c(π*_S) = min_i { c(π_S/l_i) }
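The three steps above can be sketched as a sequential simulation of the network. All names here (Letter, sweep, feasible, descendant, cost) are illustrative assumptions, not the thesis's LISP implementation; the sketch only shows the mail-passing discipline: per stage, each processor keeps one best letter per feasible object and mails it to the addressee.

```python
# A sequential simulation of the parallel processor sweep. All names
# (Letter, sweep, feasible, descendant, cost) are illustrative, not the
# thesis's LISP implementation. `feasible(v, d)` yields pairs (i, object)
# of branch index and available object; `descendant(v, i)` returns the
# addressee label vector; `cost(a, b)` is the arc cost.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Letter:                        # the "letter" data record
    obj: int                         # object placed last on this path
    value: float                     # accumulated path cost
    back: Optional["Letter"] = None  # pointer to the parent letter

def sweep(processors, feasible, descendant, cost, n):
    """Run n stages and return the cheapest letter back at the root."""
    current = {v: [] for v in processors}            # CURRENT-MAIL
    nxt = {v: [] for v in processors}                # NEXT-MAIL
    root = min(processors)                           # (0, 0, ..., 0)
    current[root].append(Letter(obj=-1, value=0.0))  # R0: the aircraft that operated last
    for d in range(n):
        for v in processors:
            mail = current[v]
            if not mail:
                continue
            for i, obj in feasible(v, d):
                # optimization step: keep only the best predecessor
                best = min(mail, key=lambda R: R.value + cost(R.obj, obj))
                Rstar = Letter(obj, best.value + cost(best.obj, obj), best)
                nxt[descendant(v, i)].append(Rstar)  # mail to the addressee
        # end of stage: NEXT-MAIL becomes CURRENT-MAIL
        current, nxt = nxt, {v: [] for v in processors}
    return min(current[root], key=lambda R: R.value)
```

Backtracking the `back` pointers of the returned letter recovers the optimal permutation, mirroring the backward pointed tree described in section 4.6.2.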
4.6 The Implementation Of The PP^MPS

4.6.1 The Parallel Processor Data Structure.
From the given description of the parallel processor, as a network of processors, one should be able to construct a physical device of hardwired processing
units each capable of accessing some main memory that would contain the base
permutation and information on the objects. Since the processing code is the
same for all the processors it could be also stored in the main memory. Thus
one can construct PP^MPS machines for different values of MPS. One could even
try to utilize the parallel processing capabilities of existing super computers.
These objectives suggest extensions to this thesis. They go, however, beyond
our current scope. In our experimentation we simulated the parallel processor
network using single sequential computers, namely the VAX-750 and the Texas
Instruments "EXPLORER" Lisp Machine.
Irrespective of choice of implementation, the parallel processor is a network
and as such we can represent it with its adjacency list, which we call more
appropriately Addressee list. Moreover, we need to have additional storage at
each supernode. So the complete and natural data representation of PP^MPS, for a given value of MPS, would be in terms of a table containing an entry for each processor p_v, indexed by I. The I-th table entry will be of the form:
I | KEY(v) | v | ADDRESSEE LIST | MAIL: [NEXT | CURRENT] or [CURRENT | NEXT]
Figures 4.6 and 4.7 show the tables for PP^2 and PP^3, omitting the mail entries. We will now describe each of the fields:
I is the index or table address.
KEY is some function that maps v into a unique integer, as for instance the function:

KEY(v) = Σ_{i=1}^{MPS} v_i (MPS)^{MPS−i}
The key is needed in the pre-processing step, for looking up the index of the addressee processor p_{D_i(v)} in order to construct the addressee list.
INDEX   KEY   v      Addressee List
0       0     (0 0)  0 1 2
1       2     (1 0)  0 3 4
2       3     (1 1)  1 3 5
3       4     (2 0)  0
4       5     (2 1)  1
5       6     (2 2)  3

Figure 4.6: The table representing the Parallel Processor PP^2, MPS = 2
INDEX   KEY   v        Addressee List
0       0     (0 0 0)  0 1 2 3
1       9     (1 0 0)  0 4 5 6
2       12    (1 1 0)  1 4 7 8
3       13    (1 1 1)  2 5 7 9
4       18    (2 0 0)  0 10 11 12
5       21    (2 1 0)  1 10 13 14
6       22    (2 1 1)  2 11 13 15
7       24    (2 2 0)  4 10 16 17
8       25    (2 2 1)  5 11 16 18
9       26    (2 2 2)  7 13 16 19
10      27    (3 0 0)  0
11      30    (3 1 0)  1
12      31    (3 1 1)  2
13      33    (3 2 0)  4
14      34    (3 2 1)  5
15      35    (3 2 2)  7
16      36    (3 3 0)  10
17      37    (3 3 1)  11
18      38    (3 3 2)  13
19      39    (3 3 3)  16

Figure 4.7: The table representing the Parallel Processor PP^3, MPS = 3
v is the label vector of processor p_v.

ADDRESSEE LIST is the list of the indices (addressees) of the descendant processors D_i(v), where i = 1, 2, ..., MPS + 1 if v_1 < MPS, and i = 1 otherwise.
MAIL is the storage for CURRENT-MAIL and NEXT-MAIL, which are to be stored in the fields designated by CURRENT and NEXT respectively. The transfer of mail that has to be done at the end of each stage can be implemented by simply switching the labels CURRENT and NEXT.
We must emphasize that the above table, representing the data structure of PP^MPS, is set up ahead of time in a pre-processing step, and can then be used for any number of different optimizations required in order to update the tentative optimal schedule upon new aircraft arrivals.
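The pre-processing step that builds this table can be sketched as follows. The descendant rule used here (selecting the i-th available object increments the first i − 1 components of v, deletes component i, and appends a 0) is inferred from Figures 4.6 and 4.7 rather than quoted from the thesis, so treat it as an assumption; the KEY function is the base-MPS expansion given above.

```python
# Sketch of the pre-processing step that builds the processor table.
# Assumed descendant rule (inferred from Figures 4.6 and 4.7): choosing
# the i-th available object adds 1 to the first i-1 components, deletes
# component i, and appends a 0.
from itertools import combinations_with_replacement

def label_vectors(mps):
    # non-increasing vectors over 0..MPS, in lexicographic order
    vs = {tuple(sorted(c, reverse=True))
          for c in combinations_with_replacement(range(mps + 1), mps)}
    return sorted(vs)

def key(v, mps):
    # KEY(v) = sum of v_i * MPS^(MPS - i), the base-MPS expansion
    return sum(vi * mps ** (mps - 1 - i) for i, vi in enumerate(v))

def descendants(v, mps):
    if v[0] == mps:                      # single descendant: shift left
        return [v[1:] + (0,)]
    out = []
    for i in range(1, mps + 2):          # i = 1 .. MPS+1
        w = [x + 1 for x in v[:i - 1]] + list(v[i:]) + [0]
        out.append(tuple(w[:mps]))
    return out

def build_table(mps):
    vs = label_vectors(mps)
    index = {v: j for j, v in enumerate(vs)}
    return [(j, key(v, mps), v, [index[d] for d in descendants(v, mps)])
            for j, v in enumerate(vs)]
```

Running `build_table(3)` reproduces the 20 rows of Figure 4.7, which is how the assumed descendant rule was checked.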
4.6.2 The Path Storage
Previously we introduced the concept of a letter as a data-record containing
the object and cost of a state, together with other optional information. Clearly
the result of sweeping the MPS-graph with the PP^MPS produces a backward pointed tree containing all the "least cost" paths¹ which originate from the root state² (l_0, {}) in the MPS-graph. The leafs at any stage of that "augmenting path tree" (APT) are contained in the mail of the processors.
If we further choose to implement the letter storage as a linked (pointed)
list of pointers to the APT leafs we can in principle add letters to the mail until
we exhaust the memory resource.
¹ These are shortest paths if c(π) = L(π).

² In the root state (l_0, {}), the selected object set is empty and object l_0 represents the aircraft that operated last.
Figure 4.8: Path storage tree. Each node contains a data record (OBJECT, COST, LABEL VECTOR, POINTER) like the one shown on top, containing the state label.
The path tree is shown schematically in figure 4.8.
Linked lists allow for variable mail content that can adapt dynamically to
requirements. The dynamic variability of mail, as we said earlier, is a result
of the potential infeasibility of states. For example if the optimal schedule
contains many idle periods, due to infrequent aircraft arrivals, that would thin
out the number of feasible states in each stage.
The implementation of such storage in languages which support a pointer
facility, as for example PL/I, can benefit from the use of the operating system's
garbage collector. As we go along, paths that turn out to be bad choices for all of their possible augmentations will be lost, since we release their pointers. (The release of the pointers is effected by the "emptying of the mail" operation mentioned above.) As can be seen in figure 4.8, several paths from the root terminate at stage d < n. As a result, a considerable amount of memory in every stage will be freed and returned to the operating system, without the user being concerned with the specifics of this action.
LISP is another language ideal for this type of tree structures because the
linked list is its favored data structure. So the pointer manipulation is handled
by the language which provides generic functions for augmenting lists, accessing
their elements and creating printed representations. This is a tremendous gift
since in other languages we would have to supply the code for these functions.
4.7 Refinement In The Label Vector Storage.

4.7.1 Label Vector Tree Representation
As can be seen from the table of figure 3.10 on page 84, for large MPS there
is a considerable amount of storage required for the data structure, and thus
any savings on that space are valuable.
More specifically for every label vector we need to store MPS numbers.
However, for large MPS, we can reduce this storage requirement greatly, by
noting that these vectors exhibit a tree structure. The label vector tree for
MPS = 3 is shown in figure 4.9 where we see that every vector component is
represented by a record containing its value and a pointer to the next component.
The label vector tree can be generated automatically in a layered fashion. Starting from the root, at depth i = 0, containing a 0 (that can be considered the component v_{MPS+1}), we construct the i-th layer by inserting, for each integer j, 0 ≤ j ≤ MPS, a node labeled by j and pointing to node x, for each node x labeled by k ≤ j in depth i − 1 (i.e. a loop with index j ranging over the integers 0 through MPS in depth i, and a nested loop with index k ranging over the nodes labeled by integers less than or equal to j in depth i − 1).
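The layered construction can be sketched directly; the node representation (label, parent) is an illustrative assumption, standing in for the (value, pointer) record described above.

```python
# Layered construction of the label vector tree, as a sketch. Each node
# is a pair (label, parent); the leaves at depth MPS enumerate the
# label vectors.

def build_lvt(mps):
    layers = [[(0, None)]]                  # depth 0: root holding a 0
    for i in range(1, mps + 1):             # add layer i
        layer = []
        for j in range(mps + 1):            # labels j = 0 .. MPS
            for x in layers[i - 1]:
                if x[0] <= j:               # parent labeled k <= j
                    layer.append((j, x))
        layers.append(layer)
    return layers

def vector_of(leaf):
    # trace leaf -> root, collecting components v1, v2, ..., vMPS
    v, node = [], leaf
    while node[1] is not None:
        v.append(node[0])
        node = node[1]
    return tuple(v)
```

For MPS = 3 this yields 20 leaves, matching T(3) = 20 and the vectors of Figure 4.7.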
Figure 4.9: The label vector tree for MPS = 3, with the array cells VECTOR(1) through VECTOR(20) pointing to its leafs. Tracing each left-most pointer will produce a distinct label vector.
The vector field of the I-th table entry, denoted as the array cell VECTOR(I), may now simply contain a pointer to a leaf of the label vector tree, which, in turn, contains the first component of a label vector. Thus, tracing a path from the pointer VECTOR(I) to the root of the tree, we can collect the vector components for the I-th processor, for a given MPS.
The advantage of the layered construction is that, if while operating at a certain MPS value we decide to increment MPS by one, we only have to add one extra layer of nodes to the existing label vector tree and complement the previous layers with a few more nodes. Moreover, the label vector tree contains the trees for smaller MPS values. In figure 4.9 a box is placed around the contents of the array VECTOR for each MPS value ≤ 3. Thus going to a smaller value of MPS is even easier. Note, however, that we still have to recalculate the addressee lists of these vectors.
4.7.2 Proof That Half Of The Processors Have Only One Descendant
So far we have established that the label vector v has only one descendant if v_1 = MPS, and only one ancestor if v_MPS > 0. So the numbers of processors in each group are equal, and we further claimed that they equal half the total number of label vectors, T(MPS)/2.
This claim can be verified in figure 4.7 where:

1. Half of the processors, i.e. those with v_1 = MPS, have only one output. They are the ones with index ≥ 10.

2. Half of the processors, i.e. those with v_MPS > 0, have only one input. Their indices 3, 6, 8, 9, 12, 14, 15, 17, 18, 19 appear only once in the addressee lists.

3. Evidently processors with indices 12, 14, 15, 17, 18, 19 have both one input and one output.
In order to prove the claim we need to count the number of vectors that have v_1 = MPS. We will do the counting on the label vector tree of figure 4.9.

At first, we need to define the quantity z_{i,j} as the number of times the number i appears at depth MPS = j of the tree. The result to be proven is:

z_{i,i} = (1/2) T(i)    (4.4)

where T(i) is the total number of vectors for MPS = i.

Recall from chapter 3 that T(i) was shown to be

T(i) = C(2i, i)    (4.5)

where C(n, m) denotes the binomial coefficient.

Next, looking at the tree we can establish the following relation

z_{i,j} = z_{i−1,j} + z_{i,j−1}    (4.6)

as follows:
Figure 4.10: The z_{i,j} matrix. Note that, deleting the first column, the diagonal rows, defined by i + j = k, are the rows of the Pascal triangle for the binomial coefficients. The apex of the triangle is the cell z_{0,1}.
1. Recall that for a vector v we have:

   v_1 ≥ v_2 ≥ ... ≥ v_{MPS+1−j} ≥ ... ≥ v_MPS

   where index MPS + 1 − j signifies that we count the vector components from the leafs of the label vector tree.

2. Then, at level MPS = j, the number i can be added, as component v_{MPS+1−j}, to all nodes at level MPS = j − 1 that contain a number, v_{MPS+2−j}, less than or equal to i. So

   z_{i,j} = Σ_{k=0}^{i} z_{k,j−1} = z_{i,j−1} + Σ_{k=0}^{i−1} z_{k,j−1} = z_{i,j−1} + z_{i−1,j}
Figure 4.10 shows the z_{i,j} matrix, which is derived as follows:

* Start with z_{0,j} = 1 for j > 0, and z_{i,0} = 0 for i > 0.

* Apply equation 4.6 recursively to obtain the remaining z_{i,j}, for all i, j > 0.
Looking at the diagonal rows of the z_{i,j} matrix, defined by

i + j = constant

we can see that they are really the rows of the Pascal triangle for the binomial coefficients. So we have:

z_{i,j} = C(i + j − 1, i)

and from 4.5

T(i) = C(2i, i) = z_{i,i+1}

But from equation 4.6:

z_{i,i+1} = z_{i−1,i+1} + z_{i,i} = C(2i − 1, i − 1) + C(2i − 1, i) = 2 C(2i − 1, i) = 2 z_{i,i}

since C(n, m) = C(n, n − m). Thus

T(i) = 2 z_{i,i}

which proves the claim. In figure 4.10 the z_{i,i} terms are shown in dashed boxes and the T(i) in circles.
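The recurrence 4.6 and the result 4.4 can be checked numerically; the sketch below builds the z matrix exactly as described (boundary conditions, then the recurrence) and compares the diagonal with T(i)/2.

```python
# Numeric check of equations 4.4-4.6: build the z matrix from the
# boundary conditions and the recurrence, then compare z_{i,i} with
# T(i)/2, where T(i) = C(2i, i).
from math import comb

def z_matrix(size):
    z = [[0] * size for _ in range(size)]
    for j in range(1, size):
        z[0][j] = 1                               # z_{0,j} = 1, j > 0
    for i in range(1, size):
        for j in range(1, size):
            z[i][j] = z[i - 1][j] + z[i][j - 1]   # equation 4.6
    return z

z = z_matrix(8)
for i in range(1, 7):
    assert z[i][i] == comb(2 * i, i) // 2         # equation 4.4
    assert z[i][i + 1] == comb(2 * i, i)          # T(i) appears at z_{i,i+1}
```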
4.8 Performance Aspects Of The Parallel Processor

The amount of computation, defining the performance of the parallel processor, is the quantity:

C_P · T(MPS) · n / N_P    (4.7)

where

n is the total number of objects;

T(MPS) is the maximum number of nodes (processors) in PP^MPS;

N_P is the number of processing units employed by the processors;

C_P is the per-processor, per-stage amount of work. In particular C_P depends on the amounts of input and output, as shown by the following table:
# INPUTS   # OUTPUTS   C_P           # OF PROCESSORS
MPS + 1    MPS + 1     (MPS + 1)^2   T(MPS − 1)
MPS + 1    1           MPS + 1       T(MPS)/2 − T(MPS − 1)
1          MPS + 1     MPS + 1       T(MPS)/2 − T(MPS − 1)
1          1           1             T(MPS − 1)
Notice that we have included in this table the number of processors in
each category. These numbers can be verified from figure 4.7, where we
can count 6, 4, 4 and 6 processors in the listed categories respectively. In
general since:
1. the condition v_1 < MPS is necessary for a label vector to be multi-descendant, and

2. the condition v_MPS = 0 is necessary for a label vector to be multi-ancestor (non-singleton input),

the label vectors in the first category form a complete subtree, LVT*, of the label vector tree, LVT, for MPS. The second condition implies that the root of LVT* is the node, x*, containing a "0", at depth MPS = 1, in LVT. In order to derive LVT* from the subtree rooted at x* we must further remove its nodes containing numbers equal to MPS. This way we satisfy the first condition above.

Of course LVT* can be identified as the label vector tree for MPS − 1, and the number of vectors in it is thus T(MPS − 1). Further, since T(MPS)/2 is the total number of multi-descendant label vectors, T(MPS)/2 − T(MPS − 1) of them are single-input. Repeating the above argument for the complement label vectors v̄ we can derive the remaining entries of the table.
Thus we can cast the average amount of work per processor as

C_P = [ T(m − 1)(m + 1)^2 + 2 (T(m)/2 − T(m − 1)) (m + 1) + T(m − 1) ] / T(m)

Recalling from chapter 3 that T(m − 1)/T(m) is about 1/4 we get

C_P ≈ (1/4) ((m + 1)^2 + 2(m + 1) + 1)

or

C_P ≈ (1/4) (MPS + 2)^2
Of course this performance applies in the case of sequential processing. In parallel processing C_P is bounded by (MPS + 1)^2, the amount of work for the first category, because we can not change stage before the mailing has finished.
Now even for a modest number of 30 objects the amount of time required on the full SS-graph is of order

n^2 · 2^n ≈ 10^12

whereas the time on the MPS-graph with MPS = 5 (which is a very adequate value for MPS in the case of runway scheduling), given by 4.7, is only

(n/4) (5 + 2)^2 · T(5) = 92,610

using sequential processing, and

n · (5 + 1)^2 = 36n = 1,080

if we used N_P = T(5) = 252 processors. This gain in speed is very important in the real-time environment of runway scheduling.
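The quoted figures can be reproduced directly from 4.7; the following is just arithmetic on the formulas above, for n = 30 and MPS = 5.

```python
# Check of the performance figures quoted above: n = 30 objects,
# MPS = 5, T(5) = C(10, 5) = 252 processors.
from math import comb

n, mps = 30, 5
T = comb(2 * mps, mps)                     # T(5) = 252
sequential = (n / 4) * (mps + 2) ** 2 * T  # C_P * T(MPS) * n, with N_P = 1
parallel = n * (mps + 1) ** 2              # bound with N_P = T(5) processors

assert T == 252
assert round(sequential) == 92610
assert parallel == 1080
```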
The performance, in case we use fewer than the maximum number of processors, will be further improved by the additional constraints and, in the case of runway scheduling, by the discretization of the approach speed into speed classes. In what follows we examine the effects of the additional constraints and of the discretization.
4.8.1 The Effect Of Additional Constraints
Previously in this chapter we mentioned that some states will be infeasible because they violate additional constraints imposed on the objects. The processors will not produce output for any infeasible descendant states. Conceivably, some processors will have no outputs whatsoever, as a result of a big intervening idle period in the optimal schedule. It may also be the case that some of the remaining available aircraft violate their latest possible operating times.

Our intention here is only to give a simple intuitive argument for the aggregate effect of fathoming on the average amount of computation, C_P.
Thus, let E(N_O) be the average number of outputs of a processor. Since every output is also an input in the next stage, we can approximate C_P, to first order of accuracy, by:

C_P = E(N_O)^2

This value is an upper bound, since the distributions of inputs and outputs to the processors do not perfectly correlate. As we saw earlier, the number of inputs will differ from the number of outputs for each processor, whereas in order to maximize C_P they would have to be equal.
In order to compute C_P we have to make some assumptions about the statistical behavior of N_O. These assumptions will depend on particular circumstances which go beyond the scope of this thesis. We give, however, in the next subsection an instance where we have assessed E(N_O).
4.8.2 The Effect Of A Limited Number Of Classes
We begin by defining a class of objects as a subset of S whose members share a property that makes them indistinguishable to the optimization. Thus, if we let g_u be the class of object u, then we can state that the costs satisfy:

c_{u,i} = c_{v,i}  whenever g_u = g_v,   u, v ∈ S

In runway scheduling this property is defined as the combination of the aircraft's approach speed, takeoff weight and operation type (landing or takeoff). By discretizing the approach speed and the takeoff weight we will get a finite number m of distinct classes present in S, where typically:

m < n

Notice that by refining the discretization we will potentially have a separate class for each aircraft.
Having a limited number of classes implies that if two objects l_1 and l_2, in the same class, swap their positions in a permutation, provided that this is permitted, the value of the sum of the arc costs in the permutation will not change, since we have:

c_{i,l_1} = c_{i,l_2}  and  c_{l_1,i} = c_{l_2,i},   for all i
In case we minimize total weighted delay (TWD) the cost function is again
indifferent to the (feasible) swap because aircraft in the same class would likely
have the same delay weight. So, the additional delay incurred by one object
will be counterbalanced by the delay savings of the others.
Letting k_j be the number of objects in class j, there are exactly k_j! variants of any permutation π_S, perhaps not all of them feasible, produced by permuting only the objects in class j. Thus in a PT search procedure we could potentially reduce the amount of search by the sizable factor

k_1! × k_2! × ... × k_m!

by imposing the condition of maintaining the FCFS order within each class.
Now for the PP^MPS we can see that maintaining the FCFS sequence is simple. Since S_v(d) is already in the FCFS order, processor v will only choose for position d + 1 the object l_i ∈ S_v(d) if there does not exist l_j ∈ S_v(d) such that j < i and g_j = g_i.
The first consequence of this strategy is that the expected number of outputs, E(N_O), equals the expected number, γ(MPS), of distinct classes within the first MPS + 1 objects of S_v(d).

The second consequence is that the expected number of inputs also equals the number, γ'(MPS), of distinct classes within the last MPS + 1 objects of S_v(d). Moreover, away from the initial and final stages we expect:

γ'(MPS) = γ(MPS)
Thus we can express the average per-processor computation as:

C_P = E(N_O)^2 = γ(MPS)^2

We expect of course that:

γ(MPS) → MPS + 1 as m → n,  and  γ(MPS) → 1 as m → 1

Finally, notice that maintaining the FCFS order within each class results in a modified state definition (p, X), X ⊂ S, where p is a distinct class.
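The FCFS-within-class selection rule described above can be sketched as a simple filter; the object identifiers and the class function below are hypothetical stand-ins for the (speed class, weight class, operation type) combinations used in runway scheduling.

```python
def candidates(available, klass):
    """Keep object i only if no earlier available object shares its class.

    `available` is assumed to be in FCFS order, as S_v(d) is.
    """
    seen, out = set(), []
    for obj in available:
        g = klass(obj)
        if g not in seen:          # no earlier object of the same class
            seen.add(g)
            out.append(obj)
    return out

# e.g. five aircraft falling into two hypothetical classes, H and S:
klass = {1: "H", 2: "S", 3: "H", 4: "S", 5: "H"}.get
assert candidates([1, 2, 3, 4, 5], klass) == [1, 2]
```

With only two classes present, only two of the five aircraft survive as candidates, illustrating why E(N_O) drops to the number of distinct classes.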
Chapter 5

Applications Of The Parallel Processor In Runway Scheduling With Takeoffs And Multiple Runways.

5.1 Mixed Landings And Takeoffs On A Single Runway.
So far we have developed the parallel processor algorithm and shown that it can find the optimal MPS-feasible permutation in time linear in the number, n, of aircraft, utilizing a number of processing units, given a cost structure that satisfies the triangular inequality (TI). Introducing takeoffs complicates the runway scheduling problem because the cost structure, c_{ij}, representing the least time separations, will potentially violate the triangular inequality. In
modification of the time evaluation rule so as to ensure proper spacing of the
landings. In this chapter we show how the parallel processor can deal with
the increased complexities of the optimization. The modified operation of the
parallel processor results in a generalized definition of state which is related to
our previous discussion of feasible state-space. This definition is used in order
to compute the size of the current mail which determines the performance of
the individual processor. In general, the size of the current mail will increase
drastically. We will show, however, that the specifics of runway scheduling can
be exploited in order to restrain it.
5.1.1 The Parallel Processor Operation
First note that the parallel processor is a "forward" machine that makes de-
cisions for its available objects only. The individual processor is not concerned
about the size and content of its current mail. On the other hand, the current
mail is determined by the function of the processors in previous stages. So in order to analyze the size and content of the current mail we have to examine the function of the processor. In turn, the content and size of the current mail is needed in order to determine the processor's performance.
Before we begin, we will assign the symbols l and t to the operation types landing and takeoff. Now the triangular inequality (TI):

c_{l_1,l_2} ≤ c_{l_1,x} + c_{x,l_2}

which is satisfied if x is a landing, may be violated if x is a takeoff. In general we may be able to insert up to m takeoffs between consecutive landings l_1 and l_2 without stretching the interlanding spacing, i.e. the inequality

c_{l_1,l_2} ≥ c_{l_1,t_1} + c_{t_1,t_2} + ... + c_{t_{m−1},t_m} + c_{t_m,l_2}

may be true.
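As a numeric illustration with hypothetical separation values (not the actual separation matrix), suppose landings must be 120 seconds apart, while the landing-takeoff, takeoff-takeoff and takeoff-landing separations are 40, 60 and 50 seconds:

```python
# Hypothetical least-time separations (seconds) between operation types.
sep = {("l", "l"): 120, ("l", "t"): 40, ("t", "t"): 60, ("t", "l"): 50}

# TI violation: one takeoff fits between two landings "for free",
# since going l -> t -> l is shorter than the direct l -> l spacing.
assert sep[("l", "t")] + sep[("t", "l")] < sep[("l", "l")]      # 90 < 120

# With two consecutive takeoffs the interlanding spacing is stretched:
assert sep[("l", "t")] + sep[("t", "t")] + sep[("t", "l")] > sep[("l", "l")]  # 150 > 120
```

Under these assumed values one takeoff can be inserted between consecutive landings at no cost, while a second takeoff would stretch the spacing; this is exactly the situation that forces the modified time evaluation rule below.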
The first consequence of this inequality is that, in order to evaluate correctly the total length L(π_X/t) and total cost¹ c(π_X/t), we have to trace back to the last landing preceding the takeoff t in π_X. This operation, which in general will require order-n number comparisons, can be reduced to order 1 by storing in the letter a pointer to the previous landing.

¹ Recall that the cost and length are identical if we minimize the total busy time.
The second consequence of possible TIV's is rather more severe in terms of the optimization, because we are now required to increase the number of letters mailed to the descendant processors. Specifically, the function of processor p_v will depend on the operation type of the next available object o_i.

CASE 1: o_i is a landing.

Given that the next available object, o_i, is a landing, processor p_v will examine all the letters in its mail and choose the one that lies on the shortest path to the root of the MPS-graph. The length of the path can be determined using the time evaluation rule suggested in chapter 2, which will ensure the correct spacing for the landings.

CASE 2: o_i is a takeoff.

Things get more complicated if o_i is a takeoff t. Consider first the situation where we have two paths:

π_{S_v(d)}/l_1  and  π_{S_v(d)}/l_2

in the current mail at stage d, terminating with landings l_1 and l_2 respectively, and assume that the total costs of these paths extended by the takeoff t satisfy:

c(π_{S_v(d)}/l_1||t) < c(π_{S_v(d)}/l_2||t)

Clearly, in the absence of a possible TIV, we would prefer path π_{S_v(d)}/l_1||t to be mailed to the descendant processor p_{D_i(v)}.

Consider however the situation in the next stage d + 1, where the next available object o_k, of the descendant processor p_{D_i(v)}, is the landing l*. It is conceivable now, that:

c(π_{S_v(d)}/l_1||t||l*) > c(π_{S_v(d)}/l_2||t||l*)
i.e. that the permutation π_{S_v(d)}/l_2||t in the current mail of p_v at stage d minimizes the total cost for a larger set.

Clearly the processor p_v faces a dilemma if o_i = t, and can not decide which letter in its current mail is the best for the optimization in a global sense. This decision can only be made at a later stage. Thus, when the next object is a takeoff, p_v has to construct a new letter for each letter in its current mail.
In the above example we examined paths in the current mail ending with landings. The current mail will also contain letters corresponding to paths which terminate with one or more consecutive takeoffs. We will partition the paths in the current mail into the sets

T_1, containing the paths that terminate with m, 0 ≤ m < m_max, takeoffs, and

T_2, containing the paths that terminate with m = m_max takeoffs.

The number m_max is determined so that the total length of the terminal consecutive takeoffs, including the currently available takeoff, o_i, is larger than the largest possible interlanding separation c_max, given by:

c_max = max { c_{l_i,l_j} }

Typically m_max will differ among the permutations represented by the letters in the current mail. Now any permutation in T_2 is certain to preclude a TIV, so from these permutations we can choose just one. However, for the permutations in T_1 we can not make a decision, and we have to create a letter for each one of them.
5.1.2 The Contents Of The Current Mail - The Generalized State.

In order to relate to our previous discussion about the state-space and the
MPS-graph, we can see now that the state definition is no longer of the form (x, X). In general, the current mail will contain many paths that terminate with the same object x and have different cost. In order to uniquely characterize the paths in the current mail we have to adopt a more general state definition. From the revised contents of the letter one might be tempted to think that the state definition should be of the form:

(l, t, X)

where l is the last landing preceding the last takeoff. In general, however, this is not true, because there may be more than one terminal takeoff which have to be optimally scheduled. We may thus care about their number and arrangement.
Consider for example the detailed function of a PP optimizing the following FCFS sequence:

(l_0; l_1, l_2, t_3, t_4, l_5, t_6, l_7, ...)

The first steps of the optimization are shown in figure 5.1, where each processor, in stage d, is represented by a box. The terminal portions of the paths corresponding to the letters in its current mail are shown to the left, and the available objects are listed to the right.
Consider, further, the paths ending with l_2, t_3, t_4 and l_2, t_4 in the current mail of the first processor, in stage d = 4. If we accept the state definition to be

(l, t, X)

then we should select one of these paths, namely the one which minimizes the operating time of t_4. The following scenario, for the implied operation times of these paths, shows, however, that such a decision may lead to a suboptimal schedule:
Figure 5.1: The first few stages of the PP operation.

[Timelines of schedules A and B, showing the implied operation times of l_2, t_3, t_4 and l_5.]
Schedules A and B correspond to the paths ending with l_2, t_3, t_4 and l_2, t_4 respectively. Clearly at stage d = 3, and based on the operating time of t_4, we would have selected schedule B over A. In stage d = 4, however, schedule A is superior since it operates l_5 sooner. Thus at stage d = 4 we need both paths, ending with l_2, t_3, t_4 and l_2, t_4.
This example suggests that states corresponding to paths in the set T_1 will be of the form:

(l, t_m, t_{m−1}, ..., t_1, X),   m = 0, 1, 2, ..., m_max − 1

For m = 0 this definition yields (l, X), since we do not need to look back beyond l.
This definition seems to suggest that we need to keep all the feasible permutations of the set

T = { t_m, t_{m−1}, ..., t_1 }

containing the terminal takeoffs. This is not necessary, however, because in the course of the optimization the processors can keep only the optimal permutations π_T/t. In other words, we are only interested in the composition of T and the optimal paths in it that end with a distinct element. So the state definition above should be corrected to:

(l, (t, T), X),   t ∈ T,  T ⊂ X
As one might have suspected, this generalized definition contains the simple state definition (t, T) for the takeoffs, which are assumed to satisfy the triangular inequality. The landing l serves as the initial condition for the embedded optimization.

The states corresponding to paths in T_2 will also be of the form (t, X). Given the fact that we have enough takeoffs to preclude a TIV, we can ignore the previous landing l, and the embedded optimization resolves into the primary optimization. In our example, if the sequences t_4, t_3, t_6 and t_3, t_4, t_6 are long enough to preclude a TIV, then there will be only one state, namely the state (t_6, {l_1, l_2, t_3, t_4, t_6}), of the form (t, X), as output corresponding to t_6 from the first processor at stage d = 4.
The number of states in every stage explodes, and if m_max approaches the current stage d then the states will correspond to nodes in the MPS-tree. In other words, there is an exponential number of states in the worst case.

Before going on to show how the specifics of runway scheduling reduce the number of states, we want to briefly present a picture of the individual processor and count the contents of its current mail.
Figure 5.2 shows schematically the current mail and the available selections for a typical processor p_v. For counting purposes, the states in the current mail are grouped according to their terminal element. We have listed first the states corresponding to paths ending with a distinct landing, followed by
Figure 5.2: Schematic representation of the input (current-mail) and output (available-objects) of processor p_v.

a) Superscripts ' and '' on an object indicate membership of the sets S_v(d) and S'_v(d) respectively.

b) The L's and T's in a suffix L, T, T, ..., T are distinct objects.
states corresponding to paths ending with a distinct takeoff. There is typically more than one state, shown in the same box, for each distinct terminal takeoff. Consistent with the definition of (t, T), we have listed the set T, distinguishing only the last object, which is marked. The remaining objects of the list represent only a distinct composition.

As can be inferred from the multiple appearance of dots, the number of paths is quite large. In runway scheduling, however, we can ignore the arrangement of the terminal takeoffs, thus reducing the number of feasible states. We will now see how.
5.1.3 Restrictions On The Feasible States.

There are two principal ways of reducing the number of feasible states, given the specifics of runway scheduling.
1. We expect that the minimum number, m_max, of consecutive takeoffs that can preclude a TIV is at most 2 or 3. Thus from all the states in the current mail that terminate with, say, 3 takeoffs we can safely select just one, irrespective of the operation type of the next available object.

2. We can reduce the number of feasible states by maintaining the FCFS order for takeoffs. Processor p_v can enforce this policy by simply mailing out paths only for its first available takeoff. As a consequence there is only one possible (t, T) state for each value of m.

The policy is justified by the fact that c_{l,t}, c_{t,l} and c_{t_1,t_2} have small and comparable mean values and negligible variance.
The current mail resulting from applying these restrictions is shown in figure 5.3. The takeoffs are indistinguishable to the optimization. Thus the state may now be re-defined as

(m, l, X)
Figure 5.3: Schematic representation of the input (current-mail) and output (available-objects) of processor p_v when the takeoffs are treated as a single class. The amount of input and output are considerably reduced.

a) Superscripts ' and '' on an object indicate membership of the sets S_v(d) and S'_v(d) respectively.

b) The L in a suffix L, T, T, ..., T is a distinct object, whereas T is not.
141
because, given that there are m terminal takeoffs, we can now uniquely identify them as the last m takeoffs in the selected S_v(d).
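The pruning implied by this state definition can be sketched in a few lines of Python. This is only an illustration under our own assumed letter layout (a tuple of time, terminal-takeoff count, last landing, and available-object set), not the thesis's LISP implementation:

```python
# Each letter is modeled as (time, m, last_landing, available), where m
# counts the terminal takeoffs, last_landing identifies l, and available
# is the set X of remaining objects.

def prune_current_mail(letters):
    """Keep only the minimum-time letter for each state (m, l, X):
    since terminal takeoffs are indistinguishable to the optimization,
    letters sharing these three components are interchangeable."""
    best = {}
    for letter in letters:
        time, m, last_landing, available = letter
        key = (m, last_landing, frozenset(available))
        if key not in best or time < best[key][0]:
            best[key] = letter
    return list(best.values())
```

Letters that differ only in the arrangement of their terminal takeoffs collapse onto the same key, so only the cheapest of each group survives.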
5.1.4 Counting The Number Of Letters In The Current Mail
Although we are mainly interested in the size of the current mail under the RSP restrictions, it is just as easy to calculate its size in general. The size under the restrictions is then simply a special case.
In figure 5.2 we see that there are N_L paths in the current mail, each terminating with a distinct landing. There are also a total of N_T clusters of paths, shown in boxes. The paths in each cluster terminate with the same distinct takeoff. In order to count the number N_X of states (l, (t, T), X) in the current mail we will first account for all the possible permutations. The resulting number N* can then be divided by the number of possible rearrangements of the m = |T| objects to produce N_X.
From our previous arguments in chapters 3 and 4 we can state that the total number of distinct objects, candidates for position d, is:

    MPS + 1    if MPS > 0
    1          otherwise
Calculating The Expected Values Of N_L And N_T.
In order to find the average number, N*, of states in the current mail we need to define the quantities f_L and f_T, where f_L + f_T = 1, representing the fractions of landings and takeoffs in S. The following remarks are in order:
1. Recall that, since we assumed a Poisson arrival process, f_L and f_T represent the probabilities of the next object being a landing or a takeoff respectively.
2. The quantity k x fL represents the expected number of landings in a
string of k consecutive arrivals.
3. Recall, from chapter 2, that the last MPS elements of S_v(d) are selected out of the set M containing the last MPS objects of S_(0,0,...,0)(d) and the first MPS objects of S^(0,0,...,0)(d). Since the composition of X is determined by the Poisson process, the fractions f_L and f_T should describe its composition.
4. Averaging the fraction of landings over all the possible sets X ⊂ M, such that X ∪ X* = M and |X| = |X*| = MPS, should yield the value of f_L. This can be seen as a result of the symmetry in the Poisson process, which requires the expected composition and arrangement of objects in the constituent subsets of M to be identical. So the fraction of landings averaged over X should equal that averaged over X* and, since we exhaust all possible partitions of M, the averaging shows no preference favoring either f_L or f_T.
[Figure 5.4 appears here: a tree whose nodes at depths m = 0, 1, 2, ..., m_max - 1 carry the values N_L and N_T.]
Figure 5.4: Tree for counting the number of feasible states.
Since, further, all the elements of X plus the element π^FCFS are candidates for position d, we conclude that the average numbers of landings, N_L, and takeoffs, N_T, in X are given by:
N_L = (MPS + 1) × f_L
N_T = (MPS + 1) × f_T          (5.2)
These values are overestimates for two reasons: first, because some states are singletons, i.e. N_L + N_T = 1, and second, because some states may turn out to be infeasible in the course of the optimization.
Using similar arguments for stage d - 1, then d - 2, etc., until d - m, we see that the values of N_L and N_T could also approximate the average numbers of landings and takeoffs that could go into positions d - 1, d - 2, ..., d - m.
The number of states can be counted on the tree of figure 5.4. There are two kinds of nodes in the tree, corresponding to landings and takeoffs, which can be described as follows:
1. A node at depth m corresponding to a takeoff contains the number of ways in which the object in position d - m of the selected permutation of S_v(d) can be a takeoff. From our discussion above, this number equals N_T for all positions, provided d - m > MPS. Given m < m_max, we can branch to another node at depth m + 1 that corresponds to either a landing or a takeoff. Otherwise there are no more branchings, since a TIV is now precluded by the definition of m_max.
2. A node at depth m corresponding to a landing contains the number of ways in which the object in position d - m of the selected permutation of S_v(d) can be a landing. From our discussion above, this number equals N_L for all positions, provided d - m > MPS. Reaching a landing, we do not need to branch any more.
Defining the "value of a leaf" as the product of the numbers in the nodes from the leaf to the root, the average total number of feasible states, N*, is the sum of leaf values, i.e.

N* = N_L (1 + N_T + N_T^2 + ... + N_T^(m_max - 1)) + N_T^m_max          (5.3)

This relation shows that N* has the potential of becoming an exponential function of the stage d, tending in the extreme to the number of nodes at stage d of the MPS-tree.
It remains to count the number, M*, of feasible rearrangements of the objects in T. Our best guess for the moment is M* = (m - 1)!, since we only care for the permutations of the set T - {t}, where t is the terminal object. Regrettably, the MPS constraints will restrict the movement of objects within the final m positions, thus rendering M* an overestimate.
Notice that the explosion in the number of feasible states is a relatively local effect. Upon arriving at a processor whose entire selection consists of landings, the number of paths mailed by the processor will return to the "normal" maximum of MPS + 1.
Counting The Restricted Number Of States.
Evaluating the number of letters in the current mail is now a simple matter. Imposing the FCFS order on the takeoffs results in the value N_T = 1. Furthermore the summation has only m_max = 3 terms. Substituting in equation 5.3 we thus get

N_X = N_L (1 + 1 + 1) + 1 = 3 f_L (MPS + 1) + 1

i.e. roughly 1.5 times the number of states given a single operation type and f_L = .5.
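As a numerical sanity check, the count of equation 5.3 and its restricted special case can be evaluated with a few lines of Python (function and argument names are ours):

```python
def expected_states(mps, f_l, m_max, fcfs_takeoffs=False):
    """Average number of feasible states per equation 5.3:
    N* = N_L (1 + N_T + ... + N_T^(m_max - 1)) + N_T^m_max.
    Under the FCFS restriction on takeoffs, N_T = 1, and for
    m_max = 3 the count collapses to 3 f_L (MPS + 1) + 1."""
    n_l = (mps + 1) * f_l                               # eq. 5.2
    n_t = 1.0 if fcfs_takeoffs else (mps + 1) * (1.0 - f_l)
    return n_l * sum(n_t ** k for k in range(m_max)) + n_t ** m_max
```

For MPS = 4, f_L = .5 and m_max = 3, the restricted count is 3 × 0.5 × 5 + 1 = 8.5 letters on average, against 40 without the FCFS restriction on takeoffs.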
5.1.5 General Remarks.
In summary of this section, we have shown how the parallel processor can
handle the additional load required by the generalized state definition. We
have also shown that the circumstances of RSP allow for simplifications which
reduce the potential explosion of the number of paths to order MPS.
Our computational experience has shown, so far, that even if we were to simply use the state definition (x, X), irrespective of the operational type of x, the solution would still be very good. This can be explained by the large number of near-optimal schedules. This large number is a consequence of the fact that the range of c_ij values, when i and j are not both landings, is rather small, of the order of 40 seconds, and has a small variance. As a result the optimal schedule, given a balanced number of takeoffs and landings, would tend to a total length of 40n seconds, and there are many possible ways of achieving it.
Notice that the magnitudes of errors, due to random fluctuations in the
system, do not justify a greater accuracy in the scheduling.
5.2 Multiple Runways
In this section we conclude the runway scheduling problem by analyzing the
issue of multiple runways. For the moment we will assume that an aircraft can
operate on any of the available runways. However, it is only a trivial extension to allow each aircraft its own set of permitted runways, determined by the aircraft's requirements such as the maximum landing/takeoff distance, cross- and tail-wind components, runway surface in case of wet weather, etc. In fact such additional restrictions are desirable because they speed up the computation, by reducing the number of feasible paths.
We will begin with the case of landings only and then show how to incorporate mixed operations. Again the state definition has to be modified in order to
account for the runway on which the last operation took place. Moreover, the
triangular inequality will now affect landings on separate runways, especially if the latter are crossing or open parallel, thus affecting the solution.
In general, we will assume that we have a FCFS sequence of aircraft that
are ordered according to their earliest possible operation time:

E_i = min { E_i^r : r ∈ R }

where R is the set of runways.
5.2.1 The Parallel Processor Operation.
With multiple runways we have to store in each letter the runway on which
the contained object has operated. This way a sequence of letters defines a
complete operating schedule.
The individual processor p_v at stage d must now assign the next available object o_i to each one of its permitted runways. It will thus produce |R| letters, containing object o_i and the assigned runway, all of which will be mailed to the same addressee processor P_{D_i(v)}. Processor p_v will use the modified time evaluation rule (see chapter 2) in order to ensure the proper inter-runway separations. Specifically, in order to calculate the correct operating time of the selected permutation of S_v(d), the
processor will now have to examine the last operation on each of the available
runways. To do this the processor will have to do a backward tracing for each
of the letters in its current mail.
To ease the implementation we can have for each letter a vector of pointers, where the r-th vector component will point to the last operation on runway r. Assigning o_i to runway r may be implemented by producing a new pointer vector (PV) which will be the same as the PV for the corresponding best letter in the current mail, except that its r-th component will point to the new letter containing o_i and r. The storage for the PV's can be temporary, since each PV will be used in only one stage.
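A minimal sketch of this pointer-vector bookkeeping in Python (class and field names are our own assumptions; the thesis's implementation was in LISP):

```python
class Letter:
    """One mailed letter: an (object, runway) assignment plus a pointer
    vector (PV) mapping each runway to the letter holding its last
    operation, so backward tracing becomes a single lookup."""
    def __init__(self, obj, runway, prev_pv):
        self.obj = obj
        self.runway = runway
        self.pv = dict(prev_pv)   # copy the predecessor's PV ...
        self.pv[runway] = self    # ... and redirect component r to us

def assign(obj, runway, best_letter):
    """Produce the letter assigning obj to runway, inheriting the PV
    of the best letter found in the current mail."""
    return Letter(obj, runway, best_letter.pv)
```

The last operation on any runway r is then letter.pv[r], with no per-letter backward trace through the schedule.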
5.2.2 Possible TIV For Crossing And Open Parallel Runways.
With multiple runways, the triangular inequality violation is a possibility
even for the case of landings only. This is typically the case with crossing and open parallel runways. Consider the case of crossing runways.
Typically, the least time separation c_{x,r_x,y,r_y} of objects x and y, on runways r_x and r_y respectively (r_x ≠ r_y), will be of the order of the runway occupancy time of the preceding aircraft x, i.e. close to 30 seconds.
Along the same lines as in section 5.1.1 for the takeoffs on a single runway, it is again conceivable that the letter in the current mail of processor p_v that minimizes the operating time of object o_i ∈ S_v(d), on runway r_i, in stage d will not be the one that minimizes the operating time for object o_k ∈ S_{v'}(d + 1), on runway r_k, in stage d + 1.
The implication is that we would now have to generate more than one path for each possible (o_i, r) pair. The resulting state definition should be of the form:

(r, x_1, x_2, ..., x_|R|, X)

where r is the runway of the last operation and x_i is the last aircraft to operate on runway i ∈ R.
The way of choosing which paths to transmit to P_{D_i(v)} is quite convoluted and we do not intend to describe it here. The justification for doing so is that the optimization produces very good results, even if we simply select from the current mail the letter that minimizes the time of operation of o_i. This can be explained by the following two facts:
1. As before, the range of least time separation values is rather small.
2. The number of paths in the current mail, which under the restricting policy is bounded by |R|(MPS + 1) (i.e. there are |R| paths for each object candidate for position d), is still large enough to allow for discovering a good schedule.
The state definition under this restriction is simply (r, x, X).
The case of open parallel runways is similar. Notice, however, that because the inter-runway separations of landings on close parallel runways are of the order of their corresponding single runway separations, a TIV is generally precluded. So the state definition (r, x, X) is entirely sufficient for close parallel runways.
5.2.3 Mixed Operations On Multiple Runways.
Clearly, including mixed operations on multiple runways makes the general unrestricted problem very cumbersome indeed, and we will not attempt to describe either the modifications of the parallel processor's function or the implied state definition. We believe that the outlined circumstances of runway scheduling make the simple state definition (r, x, X), irrespective of the operational type of x, capable of producing a very good schedule.
The next section in this chapter is devoted to the study of a randomly
generated case that supports this claim. In fact we will see that the mean
service time reduces to about 30 seconds, i.e. to the average runway occupancy
time.
5.3 Case Study.
5.3.1 Case Description.
In this section we analyze a randomly generated case, with the following
characteristics:
Runway configuration: shown in figure 5.5; contains 3 runways. Runways R1 and R2 are close parallel and cross R3 at right angles.
Traffic mix: 40% takeoffs and 60% landings.
Total number of aircraft: 120.
Arrival Process: Poisson. The earliest possible operation times are distributed
over the period of one hour.
Inter Runway Separation For Crossing Runways: constant 30 seconds.
Single Runway Separations: c_lt = c_tl = c_tt = 50 seconds.
Close Parallel Runway Separation: c_{l,r_l,t,r_t} = c_{t,r_t,l,r_l} = 0 if runways r_l and r_t are parallel.
Approach Speeds: 90, 100, 120 and 150 Knots. (spaced such that 1/v = constant.)
Aircraft Types: Heavy (H), Medium (M), Light (L) and Takeoff (T).
Remaining Time Separations: computed according to rules of chapter 2.
Delay weights: equal to 1 for all aircraft.
5.3.2 Results
The results of this case are presented with 4 graphs and a table. We start with figure 5.6, which shows graphically the aggregate numbers of aircraft that have arrived at and left the system as a function of time. The uppermost curve in each graph represents the arrival process. The remaining curves are the result of optimal scheduling for different values of MPS, with the minimum Total Busy Time (TBT) objective. The lowest curve is of course for the FCFS schedule (MPS = 0). The slope of the curves, corresponding to the service rate, increases with MPS. Notice that even for MPS = 1 we get a significant improvement in the service rate. For MPS > 4 the schedules almost overlap and catch up with the arrival process. In other words the total capacity is almost 120 aircraft per hour, whereas it was only 80 aircraft per hour for MPS = 0.
Similarly, figure 5.7 shows the same curves but for minimum Total Weighted Delay (TWD) scheduling. The results are very similar to those of TBT scheduling. Before we discuss the similarity of these results we want to present figure 5.8, which displays graphically the summary of results contained in the table of figure 5.9.
In figure 5.8 we have two sets of curves. The upper set contains two curves that plot the values of TBT, resulting from the TBT and TWD minimizations respectively, as a function of MPS, and normalized by the respective MPS = 0 value. The actual values are shown in the table of figure 5.9.
Again we note the drastic improvement in the value of TBT that occurs even for MPS = 1. We also note that these curves are very similar, crossing over at a couple of points. This similarity is not surprising, because we would expect that minimizing the total weighted delay of subsequent aircraft induces a near-minimum TBT schedule, an effect which is more pronounced in the beginning. The crossover of the two curves, paradoxical as it may seem, is insignificant and can be attributed to the following two reasons. The first reason is the small inaccuracy in the calculation of TBT. For simplicity we had the idle periods in the schedule begin at the time of the last operation, discarding the runway occupancy of the last aircraft. The second reason is the incomplete state definition used, of the form (r, x, X), which disregards the possible TIV's.
The lower set of curves in figure 5.8 plots the values of TWD, resulting from the TBT and TWD minimizations respectively, as a function of MPS, and normalized by the respective MPS = 0 value. Notice the dramatic drop in the value of TWD, which appears to be exponential. For MPS = 6 the delay incurred drops to as low as 6%. Note also the similarity and slight crossover of the curves, which are attributed to the same reasons as above. The TWD-minimal curve is, however, lower for the most part.
One might also contemplate that the equal delay weights played a role in the optimization. In general we would expect that unequal weights will affect the total time and the TWD value more than the TBT value.
In the table of figure 5.9 it can be seen that the average number of shifts per aircraft ranges between .5 and 2 for all the values of MPS. These rather small values will facilitate the task of finding the conflict-free space-time paths which should be flown by the aircraft in order to implement the optimal schedule.
Finally, in figure 5.10, the FCFS schedule is contrasted with the TBT and
TWD optimal schedules for MPS = 6. The operations for each schedule are
shown under their assigned runway and are positioned vertically according to
the operating time. The time axis, calibrated in seconds, is shown to the left.
There are three columns for each schedule, since there are as many runways,
and the coding used for the individual operations has the following format:
(AIRCRAFT INDEX - AIRCRAFT TYPE - AIRCRAFT SPEED / 10)
Note that in the FCFS schedule we have allowed higher indexed takeoffs
to precede earlier landings on the close parallel runways (see discussion in
chapter 2).
The quality of the optimized schedules can be deduced from their density of operations. The optimized schedules take roughly a total of one hour, as opposed to a total of about 2 hours for the FCFS schedule (the last part of the FCFS schedule is not shown).
[Figure 5.5 appears here: the layout of runways R1, R2 and R3.]
Figure 5.5: The Runway Configuration.
[Figure 5.6 appears here: plot titled "MINIMIZATION OF TBT", number of aircraft (0-120) vs. time (0.0-4.0 thousand seconds).]
- Uppermost curve represents the arrival process.
- Lowest curve represents the FCFS schedule.
- Intermediate curves correspond to optimizations with MPS = 1, 2, ..., 6, starting from the bottom.
- Curve slope measures total runway capacity for the curve's value of MPS.
- Notice that the capacity for MPS > 4 catches up with the arrival rate.
Figure 5.6: Results of TBT minimization.
[Figure 5.7 appears here: plot titled "MINIMIZATION OF TWD", number of aircraft vs. time (0.0-4.0 thousand seconds).]
Figure 5.7: Results of TWD minimization. The results are similar to those of TBT minimization, in the previous figure.
[Figure 5.8 appears here: normalized TBT and TWD values (percent of the MPS = 0 case, 0-100%) vs. MPS = 0, 1, ..., 6, for the TBT and TWD objectives.]
Figure 5.8: Graphic representation of the aggregate results. The upper two curves show the variation of TBT as a function of MPS. The lower two curves show the variation of TWD. The units are normalized, expressed as a percentage of the MPS = 0 cases.
SUMMARY OF RESULTS

MPS  OPT-TYPE   TBT    TBT %   TWD     TWD %   AV.-SHIFTS/AC
 0      -       5238   100.00  137572  100.0   0.00
 1     TWD      3605    68.82   36186   26.30  0.55
 1     TBT      3625    69.21   36549   26.57  0.45
 2     TWD      3200    61.09   22455   16.32  0.91
 2     TBT      3221    61.49   22028   16.01  1.09
 3     TWD      3048    58.19   13632    9.91  1.31
 3     TBT      3026    57.77   17203   12.50  1.42
 4     TWD      2940    56.13   13581    9.87  1.93
 4     TBT      2648    50.55   12025    8.74  1.70
 5     TWD      2774    52.96   10800    7.85  1.85
 5     TBT      2713    51.79   13135    9.55  2.07
 6     TWD      2497    47.67    8612    6.26  1.77
 6     TBT      2514    48.00   10804    7.85  1.74

Figure 5.9: Summary of results.
[Figure 5.10 appears here: three detailed schedules (FCFS, minimum-TBT and minimum-TWD) listing the individual operations under runway columns R1, R2 and R3, positioned against a vertical time axis calibrated in seconds.]
Figure 5.10: Detailed schedules for MPS = 6.
Chapter 6
Conclusions - Directions for Further Research.
In this final chapter we offer our conclusions in a summarized, chapter-by-chapter review of the accomplished work, the new concepts and results. The
neighborhood search using the parallel processor, inspired by the RSP, can be
applied to a variety of other combinatorial problems, such as the TSP, vehicle
routing and job scheduling. The parallel processor can be used to improve on
any existing feasible solution to any of these problems. In the second section
of this chapter we will briefly show how to solve the TSP using the repeated
application of the parallel processor, based on a divide and conquer and an
insertion strategy.
6.1 Summarized Review.
1. In chapter 2 we introduced the Runway Scheduling Problem and made a projection of the expected delay savings through optimization, for the case of landings of a single weight category on a single runway. This result revealed an expected increase in runway capacity of at least 5 to 10 percent, depending on the range of approach speeds. Given a congested traffic situation this result is expected to have a significant effect on delays.
We also argued that the real time environment of RSP and its dynamic
nature, which requires frequent schedule updates, make the speed of optimal schedule updates a critical issue.
2. In chapter 3 we looked at the more abstract problem of finding an optimal permutation for a given set S of objects, given a base permutation and Maximum Position Shift (MPS) constraints. We examined the effects of the MPS constraints on the permutation tree (PT). Their application reduces the tree to the MPS-tree, which can be viewed as an MPS-neighborhood of the base permutation. The amount of time required to search the MPS-tree for the optimal permutation was found to be an exponential function of the number of aircraft. This result motivated the introduction of the dynamic programming concept of the state space, represented by the state-stage (SS) graph. Finding the optimal permutation in the PT can be carried out efficiently on the SS-graph by keeping track of intermediate results and avoiding repeated calculations, at the expense of additional storage.
We also introduced the concept of the combination graph, whose nodes are all the subsets of the available objects. The SS-graph and the combination graph are reduced by the MPS constraints to the MPS-graph and the MPS combination graph (MPS-CG). In order to calculate the size of the MPS-tree we devised the bipartite (B-) graph representations of subtrees of the MPS-tree. The B-graphs revealed stage-invariant structures, which characterize the available object subsets of S corresponding to the MPS-subtrees.
We counted the number, T(MPS), of possible graph types. T(MPS) was found to be approximately 4^MPS. The graph types, corresponding to generalized available object set compositions, were related to the label vectors. An available object set can be determined from its respective label vector and the stage in which it appears.
3. Chapter 4 begins with a detailed look at the MPS-graph, which is the compact representation of the solution space. The MPS-graph is subsequently transformed into an MPS-CG whose nodes have a special structure. This node structure allows the stage-wise optimization to be carried out independently in nodes at the same stage of the MPS-CG.
The discussion of the MPS-CG is completed with issues of symmetry, relating to the performance of the solution procedure, and with issues relating to the dynamic nature of the RSP. In this respect, the MPS-CG is seen as a tentative portion of a global MPS-CG.
The stage-invariant structure of the MPS-CG further shows that it can be folded horizontally into a network, called the parallel processor, PP_MPS, which may be used to represent the cross section of the MPS-CG at any stage d. Since the optimization at stage d can be carried out independently at each node in the cross section of the MPS-graph, it can also be done in parallel, by the parallel processor. By assigning a processing unit to each of its nodes, appropriately equipped with storage for data, we can turn the parallel processor into a parallel processing machine.
PP_MPS is represented by a data storage structure that can be set up ahead of time and used repeatedly, for a given value of MPS. The PP_MPS algorithm is presented and its performance is analyzed together with refinements in its implementation.
4. The parallel processor, as presented in chapter 4, can be used in order to
solve the RSP given a single runway and a cost structure that satisfies
the triangular inequality.
In chapter 5 we first apply the parallel processor to the case of mixed
takeoffs and landings on a single runway. The triangular inequality (TI)
that is assumed to hold within each operation type will in general be
violated in mixed type situations. The parallel processor is shown to be
able to cope optimally with the situation. The modified operation of the individual processor, at each stage, results in a generalized state definition, which is used in turn to analyze the processors' performance. This performance is further expedited by exploiting the specifics of runway scheduling.
In the second section of chapter 5, we examine the implications of multiple runways. The Triangular Inequality Violation (TIV) is shown to
be a possibility even for landings on crossing and open parallel runways.
This possible TIV results in a generalized state definition which is briefly
discussed, but abandoned as overly complex, in favor of a simple heuristic state definition. The same simplified approach is also suggested for the most general case of mixed operations on multiple runways.
In support of the simplified approach we conclude chapter 5 with a randomly generated case study. The results show very eloquently that there are very substantial increases in the operational capacity of a runway system containing two close parallel runways crossing a third runway. The capacity improvements, substantial even for MPS = 1, catch up with the arrival process for MPS > 4.
6.2 Application Of The Parallel Processor In Solving The TSP.
The simplifying aspect of RSP lies in the "position shift" constraints, installed to limit the inequity in the treatment of aircraft. As we saw earlier, these constraints define a "neighborhood" of tours around a given base tour π^0, containing any re-arrangement π of π^0 such that the new position π_i of object i is within MPS = k positions of π_i^0, i.e. |π_i^0 - π_i| ≤ k. The constant k can be viewed as a parameter that determines, together with π^0, the size and content of the solution space. Allowing k to equal the total number, n, of objects (aircraft, cities, etc.) one recovers the full TSP. We may thus use the term "k-neighborhood" for a neighborhood with parameter k.
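The k-neighborhood membership test is straightforward to state in code; a minimal Python sketch (function and variable names are ours):

```python
def in_k_neighborhood(perm, base, k):
    """True if every object's position in perm is within k positions
    of its position in the base tour (the MPS constraint)."""
    base_pos = {obj: i for i, obj in enumerate(base)}
    return all(abs(i - base_pos[obj]) <= k for i, obj in enumerate(perm))
```

With k = len(base) every permutation passes the test, recovering the full TSP as noted above.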
Now the key concept behind a heuristic procedure for the TSP is that of a "divide-and-conquer" strategy, whereby a good base permutation is generated through successive clusterings of the nodes (cities) in the graph. Each clustering operation results in a graph of fewer nodes, and the process is repeated, creating a series of increasingly smaller graphs G^1, G^2, ..., G^m until the number of nodes in G^m is less than 2k. The optimal tour in G^m lies, of course, in the k-neighborhood of any random base tour. Having found the optimal tour in G^m, we can construct the base tour for G^{m-1} by substituting each node with its contents. We then search the k-neighborhood of this base tour to find the optimal tour in G^{m-1}, and repeat this substitution/optimization step until we obtain a tour π^0 in the original graph.
There are several ways of constructing the clusters. The way chosen in our experiments was through Weighted Non-bipartite Matching (WNBM). For this purpose we implemented the primal-dual WNBM algorithm of Papadimitriou and Steiglitz [26]. The mated nodes are merged into a single node in the new graph, with the coordinates of the mid-point of the mated nodes. This way we construct a cost matrix containing the distances of these nodes for Euclidean graphs. An alternative, more general way of creating the cost structure of the new graph is by defining the cost c_ij^k for graph G^k as the minimum cost of the arcs in G^{k-1} connecting the clusters i and j.
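A toy Python sketch of one coarsening level and of the expansion step may clarify the mechanics. A greedy closest-pair mating stands in for the primal-dual WNBM algorithm actually used; all names here are ours:

```python
import itertools

def dist(p, q):
    """Euclidean distance between two 2-D points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def greedy_pairing(points):
    """Toy stand-in for weighted non-bipartite matching: repeatedly
    mate the two closest remaining nodes."""
    remaining = set(points)
    pairs = []
    while len(remaining) > 1:
        a, b = min(itertools.combinations(remaining, 2),
                   key=lambda e: dist(*e))
        pairs.append((a, b))
        remaining -= {a, b}
    if remaining:                       # odd node count: a lone cluster
        pairs.append((remaining.pop(),))
    return pairs

def coarsen(points):
    """Merge mated nodes into mid-point supernodes; also return the
    contents map needed later to expand a coarse tour."""
    contents, new_points = {}, []
    for pair in greedy_pairing(points):
        mid = tuple(sum(c) / len(pair) for c in zip(*pair))
        contents[mid] = list(pair)
        new_points.append(mid)
    return new_points, contents

def expand(coarse_tour, contents):
    """Substitute each supernode of a coarse tour with its contents,
    producing the base tour for the next finer graph."""
    return [node for sn in coarse_tour for node in contents[sn]]
```

Repeated calls to coarsen build the series G^1, ..., G^m; repeated calls to expand, each followed by a k-neighborhood optimization, walk back down to the original graph.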
From our tests the resulting π^0 was found to be the optimal tour for square grid graphs of up to 225 nodes for a value of k = 5. The results were also very good for random graphs of up to 100 nodes, using the same value of k. As a performance criterion we used the ratio of the length of π^0 divided by the weight of the Minimum Spanning Tree. The resulting ratios were found to be in the range (1.1, 1.2), which contains the expected value of that ratio. The test cases, conducted on a Texas Instruments "LISP MACHINE", required computation times of the order of a few seconds.
Alternatively, base tours can be generated using node insertion ideas. Starting from a random base subtour containing only 2k nodes, optimize it, and then create an augmented base subtour by inserting into it some of the remaining nodes of the graph. Repeat this insertion/optimization step until the result is a complete tour.
The new nodes are inserted such that the cost of the added edges minus the cost of the removed edge is minimum. Notice that in general the insertion of a node will require modifications of the remainder of the tour, and this is why we suggest its optimization with the parallel processor. Notice also that it may be preferable to insert more than one node in each step.
The quality of the solution from the insertion strategy was also found to be quite satisfactory.
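The insertion rule — minimize the cost of the added edges minus the cost of the removed edge — can be sketched as follows (names ours; the cost function is supplied by the caller):

```python
def cheapest_insertion_point(tour, node, cost):
    """Return the index i minimizing the extra length of inserting
    node between tour[i] and tour[i + 1] (cyclically): the cost of
    the two added edges minus the cost of the removed edge."""
    best_i, best_delta = 0, float("inf")
    for i in range(len(tour)):
        a, b = tour[i], tour[(i + 1) % len(tour)]
        delta = cost(a, node) + cost(node, b) - cost(a, b)
        if delta < best_delta:
            best_i, best_delta = i, delta
    return best_i
```

After each batch of insertions, the augmented subtour would be re-optimized within its k-neighborhood, since the cheapest splice point alone does not account for the rearrangements an insertion may make worthwhile elsewhere in the tour.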
6.3 Research Directions.
In terms of the runway scheduling problem, further research should include the experimental study of applying the parallel processor in a more realistic environment, where the following issues are taken into account:
1. Typically every aircraft is known to ATC x minutes before its earliest
time of operation and its schedule needs to be fixed y minutes before its
assigned time of operation.
The average value of x and the chosen value of y are going to affect the quality of the solution, and consequently a fast-time simulation of the complete runway scheduling process has been created.
In times of congestion we feel that a high value of y will offset the restrictions posed by x. We feel, however, that some carefully planned study of the scheduling process should be carried out using this simulation as a tool.
2. In extending the fast-time simulation one should include the conflict-free four-dimensional path generation problem. Several options for automatic path generation are currently under consideration in the Flight Transportation Laboratory.
3. One could also look into the external control aspects of the runway scheduling process that are offered to the human operators, as well as the potential for customizing the process to the particular needs of given ATC stations.
In terms of the TSP, a challenging project would be to assess the expected length of the tours produced by our proposed heuristics, and perhaps invent new ones. One might also want to build a parallel TSP machine based on the parallel processor, or perhaps implement it on existing super-computers.
As we said earlier, one might be able to solve from scratch, or optimize
existing feasible solutions of, vehicle routing and job scheduling problems, and
in general problems amenable to a dynamic programming approach.
With regard to job scheduling, we should mention that precedence constraints can be enforced by the parallel processor. In a dynamic situation where
there is a constant inflow and outflow of jobs, the parallel processor may be an
ideal job scheduling tool. A similar situation could be the dynamic scheduling
of taxi cabs, dial-a-ride services, etc.
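The precedence check involved can be stated compactly. The sketch below is our own illustration (names and the dict-of-sets encoding are assumptions): in a dynamic-programming style expansion, job j may extend a partial schedule only when every predecessor of j has already been scheduled.

```python
# Enforcing precedence constraints during state expansion: a job is a
# feasible extension of a partial schedule iff it is unscheduled and
# its predecessor set is contained in the set of scheduled jobs.

def feasible_extensions(done, all_jobs, preds):
    """done: set of scheduled jobs; preds[j]: jobs required before j."""
    return sorted(j for j in all_jobs
                  if j not in done and preds.get(j, set()) <= done)
```

In a dynamic setting, newly arriving jobs simply join `all_jobs` and completed ones move into `done`, which is what makes such a per-step feasibility check attractive for a rolling schedule.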
Bibliography
[1] Aho, A.V., J.E. Hopcroft and J.D. Ullman "The Design And Analysis Of
Computer Algorithms" Addison-Wesley 1974.
[2] Baker, Kenneth R. "Introduction to Sequencing and Scheduling" John Wiley & Sons, Inc. 1974.
[3] Balas, E. and N. Christofides "A restricted Lagrangian Approach To The
Traveling Salesman Problem" Mathematical Programming, 21, 19-46, 1981.
[4] Barnes, J. Wesley and Lawrence K. Vanston "Scheduling Jobs With Linear Delay Penalties And Sequence Dependent Setup Costs" Operations
Research Vol 29, Jan-Feb 1981, pp 146.
[5] Bellman, R. "Combinatorial Processes and Dynamic Programming" Proceedings Of The 10th Symposium In Applied Mathematics Of The American Mathematical Society 1960.
[6] Bertsekas, Dimitri P. "Dynamic Programming and Stochastic Control"
Academic Press 1976.
[7] Blumstein, A. "An Analytic Investigation Of Airport Capacity" Ph.D. thesis, Cornell University 1960.
[8] Bodin, L., B. Golden and A. Assad "Routing and Scheduling of Vehicles
and Crews: The State of the Art" Computers and Operations Research,
10, 62-212, 1983.
[9] Conway, Richard W., William L. Maxwell and Louis W. Miller "Theory of
Scheduling" Addison-Wesley 1967.
[10] Christofides, N. "Worst-Case Analysis Of A New Heuristic For The Travelling Salesman Problem" Appearing in "Algorithms And Complexity: New Directions And Recent Results" 1976.
[11] Crowder, H. and M. Padberg "Solving Large-Scale Symmetric Travelling
Salesman Problems to Optimality" Management Science, 26, pp 495, 1980.
[12] Dear, Roger George: "The Dynamic Scheduling of Aircraft in the Near
Terminal Area"; Cambridge: Massachusetts Institute of Technology, Flight
Transportation Laboratory, 1976. (FTL-report R76-9)
[13] Dempster, M.A.H., J.K. Lenstra and A.H.G. Rinnooy Kan "Deterministic and Stochastic Scheduling" Proceedings of the NATO Advanced Study
and Research Institute on Theoretical Approaches to Scheduling Problems
held in Durham, England, July 6-17, 1981, D. Reidel Co.
[14] French, Simon "Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop" Ellis Horwood Ltd - Chichester 1982.
[15] Garey, Michael R. and David S. Johnson "Computers and Intractability:
A Guide to the Theory of NP-Completeness" W.H. Freeman and Co. 1979.
[16] Golden, Bruce L. and Arjang A. Assad "Perspectives on Vehicle Routing:
Exciting New Developments" survey, College of Business and Management, University of Maryland, August 1986.
[17] Graves, Stephen C. "A Review Of Production Scheduling" Operations
Research Vol. 29, 1981, pp 646.
[18] Held, M. and Karp, R.M. "A Dynamic Programming Approach to Sequencing Problems" Journal of SIAM, Vol. 10, pp 178, 1962.
[19] Karp, Richard M. "Probabilistic Analysis Of Partitioning Algorithms For
The TSP In The Plane" Mathematics of Operations Research, Aug 1977.
[20] Kolen, A., A. Rinnooy Kan and H. Trienekens "Vehicle routing with time
windows" Operations Research, forthcoming, 1986.
[21] Lenstra, J.K. and A.H.G. Rinnooy Kan "Scheduling Theory Since 1981:
An Annotated Bibliography" Mathematisch Centrum, Aug. 83, Amsterdam.
[22] Lin, S. and B.W. Kernighan, "An Effective Heuristic For The Travelling
Salesman Problem" Operations Research Vol.21, No. 2, 1973.
[23] Liu, C.L. "Introduction To Combinatorial Mathematics" McGraw-Hill
1968.
[24] Nicholson, T.A.J. "A method for optimizing permutation problems and its
industrial applications", in: Welsh, D.J.A. (ed.), "Combinatorial Mathematics and its Applications", Academic Press 1971.
[25] Odoni, Amedeo R. "An Analytic Investigation Of Air Traffic In The Vicinity Of Terminal Areas" MIT Operations Research Center Technical Report
No.46, Dec. 1969.
[26] Papadimitriou, C.H. and K. Steiglitz "Combinatorial Optimization - Algorithms
and Complexity." Prentice Hall, 1982.
[27] Pararas, John D.: "Decision Support Systems For Automated Terminal
Area Air Traffic Control"; Cambridge: Massachusetts Institute of Technology, Flight Transportation Laboratory, 1982. (FTL-report R82-3)
[28] Rue, Robert C. "The Application Of Semi-Markov Decision Processes To
Queueing Of Aircraft For Landing At An Airport" Transportation Science
Vol. 19, May 1985, pp 154.
[29] Pardee, R.S. "An Application Of Dynamic Programming To Optimal
Scheduling In The Terminal Area Air Traffic Control System" TRW Computers Co., 19XX.
[30] Parker, Robert G., R.H. Deane and R.A. Holmes "On The Use Of
A Vehicle Routing Algorithm For The Parallel Processor Problem With
Sequence Dependent Changeover Costs" AIIE Transactions, June 1977.
[31] Psaraftis, Harilaos: "A Dynamic Programming Approach to the Aircraft
Sequencing Problem"; Cambridge: Massachusetts Institute of Technology,
Flight Transportation Laboratory, 1978 (FTL-report R78-4)
[32] Psaraftis, Harilaos: "An Exact Algorithm For The Single Vehicle Many-To-Many Dial-A-Ride Problem With Time Windows" Transportation Science, Vol. 17, pp 351, 1983.
[33] Reiter, S. and Sherman, G. "Discrete Optimizing" Journal of SIAM, Vol.
13, pp 864, 1965.
[34] Rinnooy Kan, Alexander H.G. "Machine Scheduling Problems" Martinus
Nijhoff, The Hague 1976.
[35] Savelsbergh, M. "Local Search In Routing Problems With Time Windows"
Technical Report 8409, Center For Mathematics And Computer Science,
Amsterdam 1984.
[36] Sexton, T. "The Single Vehicle Many-To-Many Routing And Scheduling Problem" Ph.D. Dissertation, State University of New York at Stony
Brook, 1979.
[37] Simpson, Robert W.: "Critical Examination of Factors which Determine
Operational Capacity of Runway Systems at Major Airports"; Cambridge: Massachusetts Institute of Technology, Flight Transportation Laboratory, Nov. 1984.
[38] Solomon, M. "Vehicle Routing and Scheduling With Time Window Constraints: Models And Algorithms" Ph.D. Dissertation, Dept. of Decision
Sciences, University of Pennsylvania, 1983.
[39] St. George, Martin J. "Congestion Delays at Hub Airports"; Cambridge:
Massachusetts Institute of Technology, Flight Transportation Laboratory,
June 1986. (FTL-report R86-5)
[40] Völckers, Uwe "Computer Assisted Arrival Sequencing And Scheduling
With The "COMPASS"-System"
DFVLR - Institut für Flugführung,
Braunschweig, Germany, presented at seminar on Informatics in Air Traffic
Control, Oct 1985, Capri, Italy.