Scheduling Parallel Tasks

By
Atena Daneshmandi
ABSTRACT:
Scheduling is a long-standing problem that has motivated a great deal of research in different fields. In the parallel-task setting, it amounts to determining the starting time and the processor assignment of each task.
Scheduling on modern parallel systems is considerably more complex because of the new characteristics of these systems. Supercomputers have increasingly been replaced by collections of large numbers of standard components, physically distant from each other and heterogeneous. Efficient algorithms for scheduling parallel tasks are therefore a central requirement for wider use of such machines. Today, the lack of adequate software tools is the main obstacle to using these powerful systems to solve large and complicated real applications.
A running parallel application can be viewed as a set of parallel component tasks that execute subject to precedence relationships. Efficient scheduling of these tasks makes it possible to fully exploit the computational power provided by a multiprocessor system. It consists of mapping partially ordered tasks onto the processing elements of the system architecture.
This paper discusses the scheduling of a set of parallel tasks on a multiprocessor or multicomputer system. The model assumes that the system consists of identical processors and that a task may execute on several processors at the same time. All schedules are non-preemptive. Graham's well-known list-scheduling algorithm (LSA) is compared with an approach based on an indirect decoding representation of schedules.
INTRODUCTION:
A parallel program is a collection of tasks, some of which must be completed before others can start. The precedence relations among tasks are usually described in a directed graph known as the task graph. Nodes in the graph identify tasks and their durations, and arcs represent the precedence relations. Parameters such as the number of processors, the number of tasks, and the task precedences make it hard to find a high-quality schedule. The problem is to obtain a schedule on P ≥ 2 processors of equal capability. Even minimizing the total processing time of independent tasks has been shown to belong to the NP-complete class (Horowitz and Sahni 1976).
Task scheduling can be classified as static and dynamic. Several strong reasons make static scheduling attractive. First, static scheduling sometimes results in shorter execution times than dynamic scheduling. Second, static scheduling allows only one process per processor, reducing process creation, management, and context-switching overhead. Finally, static scheduling can be used to predict, before launch, the performance of a specific parallel algorithm on a target machine.
Discussion of Parallel Tasks:
The idea behind parallel tasks is to provide an alternative means of dealing with communications, particularly when delays are large. For many applications, users have good qualitative knowledge of their programs' behavior, and this knowledge is usually sufficient to guide the parallelization.
Informally, a parallel task (PT) is a task that groups elementary operations, typically a numerical routine or a nested loop, and that itself contains enough parallelism to be executed on multiple processors. This view is more general than the standard one and includes sequential tasks as a special case. The problem of scheduling PTs is hard to solve. Two ways of building parallel tasks can be distinguished:
Parallel tasks as parts of a large parallel application. Here the analysis is typically off-line: the execution time of each task can be estimated reasonably precisely, along with the precedence relations between parallel tasks.
Parallel tasks as independent jobs in a multi-user context. Here parallel tasks are submitted at any time, and the execution time of each one can only be estimated, depending on the kind of application.
The parallel task model is well suited to general-purpose computing: large communication delays do not have to be modeled explicitly as they are in standard models, the hierarchical character of the execution platform can be expressed naturally, and the model can absorb uncertainty in the input parameters.
Allocating many parallel tasks in parallel environments such as multiprocessors is a complex and significant problem in computer systems. The allocation usually attempts to minimize the makespan; individual processor utilization or the balance of the load distribution can also be measured.
In many cases we will be content with polynomial-time scheduling algorithms that provide good, though not necessarily optimal, solutions. The list-scheduling algorithm (LSA) satisfies this requirement.
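A minimal sketch of Graham's list scheduling for independent sequential tasks, assuming identical processors and an arbitrary but fixed task list (the durations below are illustrative): each task is started, in list order, on the processor that becomes free earliest.

```python
import heapq

def list_schedule(durations, num_procs):
    """Graham's list scheduling for independent sequential tasks:
    take tasks in list order and start each one on the processor
    that becomes free earliest. Returns (makespan, assignment)."""
    # Min-heap of (time the processor becomes free, processor id).
    free_at = [(0, p) for p in range(num_procs)]
    heapq.heapify(free_at)
    assignment = []
    for d in durations:
        t, p = heapq.heappop(free_at)
        assignment.append((p, t, t + d))   # (processor, start, finish)
        heapq.heappush(free_at, (t + d, p))
    makespan = max(finish for _, _, finish in assignment)
    return makespan, assignment

makespan, sched = list_schedule([3, 2, 2, 1, 1, 1], num_procs=3)
print(makespan)  # 4
```

For this instance the greedy result happens to be optimal (total work 10 on 3 processors forces a makespan of at least 4); in general Graham's bound guarantees the result is within a factor 2 − 1/P of optimal.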
Typology of Parallel Tasks:
There are several variants of parallel tasks, based on how they execute in a parallel and distributed system (see Drozdowski's survey): a task is rigid when the number of processors used to execute it is fixed a priori. This number may be a power of 2 or an arbitrary integer. In this case, the PT can be represented as a rectangle in a Gantt chart, and the allocation problem corresponds to a strip-packing problem.
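As an illustrative sketch of this correspondence, a rigid task can be treated as a rectangle (processors needed × duration) and packed with a simple first-fit decreasing-height shelf heuristic; this is one classic strip-packing strategy, not the specific algorithm analyzed in the cited work, and all task sizes below are made up:

```python
def shelf_pack(tasks, num_procs):
    """First-fit decreasing-height shelf heuristic for rigid tasks.
    Each task is a rectangle (procs_needed, duration); the strip width
    is the number of processors and the total packing height is the
    resulting schedule length."""
    shelves = []  # each shelf is [height, remaining_width]
    for procs, dur in sorted(tasks, key=lambda t: -t[1]):
        if procs > num_procs:
            raise ValueError("task needs more processors than available")
        for shelf in shelves:
            if shelf[1] >= procs:       # first shelf with enough room
                shelf[1] -= procs
                break
        else:                           # no shelf fits: open a new one
            shelves.append([dur, num_procs - procs])
    return sum(height for height, _ in shelves)

# (processors, time) rectangles packed on a 4-processor strip.
print(shelf_pack([(2, 3), (2, 3), (3, 2), (1, 2), (4, 1)], num_procs=4))  # 6
```

Sorting by decreasing duration keeps each shelf's wasted height small, since every task on a shelf runs under the height of the first (tallest) task placed there.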
Most parallel programming tools and languages offer some flexibility, including support for dynamically adding processing nodes. However, this capability must usually be taken into account explicitly by the application designers, since computing power may also be withdrawn during execution.
The main constraint is the need, for a competent scheduling algorithm, to estimate the parallel execution time as a function of the number of processors. The user has this knowledge most of the time, but obtaining it remains an obstacle to the more systematic use of such models.
Task Graphs:
This section briefly explains how to obtain and handle parallel tasks. We consider two types of parallel tasks, corresponding respectively to independent jobs and to applications composed of large tasks.
Our purpose here is not to discuss the representation of applications as graphs for parallelization; it is known that extracting a representative graph from an application implemented in a high-level programming language is complex. The graph abstraction is convenient and can be refined in many ways. Usually, the code of an application can be represented by a directed acyclic graph where the vertices are the instructions and the edges are the data dependencies [M. R. Garey].
In a typical application, such a graph is composed of tens, hundreds, or thousands of tasks, or even more. Using a symbolic representation such as that of [M. R. Garey], the graph may be managed at compile time. Otherwise, it must be built on-line, at least partially, during execution. A moldable, or malleable, task graph is a way to group elementary sequential tasks; being much smaller, it is handled more easily.
There are two major ways to develop a task graph of parallel tasks: either the user has a relatively good knowledge of the application and is able to provide the graph, or the graph is built automatically from a larger graph of sequential tasks generated at runtime [M. R. Garey].
DETERMINISTIC MODEL:
In a deterministic model, the execution time of each task and the precedence relations among tasks are known in advance. Consider, for example, a set of parallel tasks with fixed durations and precedence constraints.
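A hypothetical deterministic instance can be written down directly as data; the longest path through the precedence graph (the critical path) then gives a lower bound on the makespan of any schedule. All task names and durations below are invented for illustration:

```python
# Hypothetical deterministic instance: each task has a known duration,
# and preds[t] lists the tasks that must finish before t starts.
duration = {"T1": 2, "T2": 3, "T3": 2, "T4": 4, "T5": 1}
preds = {"T1": [], "T2": ["T1"], "T3": ["T1"], "T4": ["T2", "T3"], "T5": ["T4"]}

def critical_path(duration, preds):
    """Earliest finish time of each task, ignoring processor limits;
    the maximum over all tasks is the critical-path length, a lower
    bound on the makespan of every feasible schedule."""
    finish = {}
    def eft(t):
        if t not in finish:
            finish[t] = duration[t] + max((eft(p) for p in preds[t]), default=0)
        return finish[t]
    return max(eft(t) for t in duration)

print(critical_path(duration, preds))  # 10  (chain T1 -> T2 -> T4 -> T5)
```

No schedule can beat this bound regardless of how many processors are available, which is what makes it useful when judging heuristic schedules.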
Such a task graph is a simplified illustration of a parallel program's execution. It provides a foundation for the static allocation of processors.
In a Gantt chart, the start and finish times of each task on the available processors are indicated, and the makespan (the total execution time of the parallel program) of the schedule can be derived easily.
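For instance, given a schedule in Gantt-chart form (one row of (task, start, finish) triples per processor; the values below are illustrative), the makespan is just the latest finish time:

```python
# A schedule in Gantt-chart form: for each processor, the list of
# (task, start, finish) triples it executes, in time order.
gantt = {
    0: [("T1", 0, 2), ("T2", 2, 5), ("T5", 9, 10)],
    1: [("T3", 2, 4), ("T4", 5, 9)],
}

def makespan(gantt):
    """The makespan is the latest finish time over all processors."""
    return max(finish for rows in gantt.values() for _, _, finish in rows)

print(makespan(gantt))  # 10
```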
Various simple scheduling problems can be solved to optimality in polynomial time, while the rest can be computationally difficult.
Since we are interested in scheduling arbitrary task graphs onto a reasonable number of processors, we are content with polynomial-time scheduling algorithms that provide good solutions even though optimal ones cannot be guaranteed.
Complexity:
The main complexity result for the problems we are considering is that they are NP-hard. The rigid case has been studied in depth in the survey [Garey].
All the complexity proofs for the rigid case involving only sequential tasks extend to the moldable case on any number of processors. All these problems are NP-hard.
Most of the algorithms have low complexity and can therefore be implemented in actual parallel programming environments. For the moment, most of them do not exploit the moldable or malleable character of the tasks, but this should increasingly become the case. We have not discussed in this paper how to adapt the model to other features of new parallel and distributed systems: it is very natural, for example, to deal with hierarchical systems [P. Brucker].
Conclusion:
This paper has presented an appealing model, based on parallel tasks, for efficiently scheduling applications on parallel and distributed systems. It is an alternative to direct computational models, mainly for very large systems. Approximation scheduling algorithms can be designed for the different types of parallel tasks, for both off-line and on-line cases. All these cases correspond to systems where communications are relatively slow.
The mixed case is more complex because most of the methods assume homogeneity of the processors executing the parallel tasks. In that case, the execution time does not depend only on the number of processors allotted to a task; it depends on the actual set of processors, since their capacities may differ.
Ordinary sequential programming fails to exploit the growing processing capacity of large multicore systems.
Based on worst-case task-set characteristics, a task transformation can be performed by the operating system scheduler. This task-stretching transformation is easily implementable on standard operating systems, and executing tasks in parallel results in improved system utilization.
References
J. Błażewicz, K. Ecker, E. Pesch, G. Schmidt, and J. Węglarz. Scheduling in Computer and Manufacturing Systems. Springer-Verlag, 1996.
M. Pinedo. Scheduling: Theory, Algorithms, and Systems. Prentice-Hall, Englewood Cliffs,
1995.
P. Brucker. Scheduling. Akademische Verlagsgesellschaft, Wiesbaden, 1981.
H. Shachnai and J. Turek. Multiresource malleable task scheduling to minimize response time.
Information Processing Letters, 70:47–52, 1999.
F. Afrati, E. Bampis, A. V. Fishkin, K. Jansen, and C. Kenyon. Scheduling to minimize the
average completion time of dedicated tasks. Lecture Notes in Computer Science, 1974, 2000.
D. Hochbaum, editor. Approximation Algorithms for NP-hard Problems. PWS, September 1996.
R. P. Brent. The parallel evaluation of general arithmetic expressions. Journal of the ACM, 21(2):201–206, July 1974.
M. Drozdowski. Scheduling multiprocessor tasks—an overview. European Journal of
Operational Research, 94(2):215–230, 1996.
B. Monien, T. Decker, and T. Lücking. A 5/4-approximation algorithm for scheduling identical malleable tasks. Technical Report tr-rsfb-02-071, University of Paderborn, 2002.
J. Turek, J. Wolf, and P. Yu. Approximate algorithms for scheduling parallelizable tasks. In 4th
Annual ACM Symposium on Parallel Algorithms and Architectures, pages 323–332, 1992.
A. Steinberg. A strip-packing algorithm with absolute performance bound 2. SIAM Journal on
Computing, 26(2):401–409, 1997.
M. R. Garey and R. L. Graham. Bounds on multiprocessor scheduling with resource constraints.
SIAM Journal on Computing, 4:187–200, 1975.