Full Description

1. Title of subject
Distributed and Parallel Computing
2. Subject code
TDP3471
3. Status of subject
Major
4. Version
Date of Previous Version: Year 2006
Date of New Version: December 2007
4. Credit Hour
3
28 Hours of lecture
28 Hours of tutorial lab
LAN’s Credit Hours Equivalent: 3.00
5. Semester
Trimester 3 (Delta Level)
6. Pre-Requisite
Operating Systems and Internetworking (TOS2581)
7. Methods of teaching
28 Hours of Lecture
28 Hours of Lab
8. Assessment
60% Coursework
40% Final Exam
Total: 100%
9. Teaching staff (Proposed)
Dr. Ho Sin Ban
Tong Hau Lee
10. Objective of subject
The objective of the subject is to provide an integrated view
of the various aspects of distributed and parallel computing.
The fundamental concepts underlying these technologies are
introduced. Hardware architectures, programming languages and
models, software development tools, software engineering
concepts and design patterns are presented and practised.
11. Synopsis of subject
1. To enable the student to build and reason about distributed and parallel applications.
2. To teach the theory and practice of distributed and parallel systems.
3. To expose students to the technologies of contemporary distributed middleware (CORBA, DCOM and Web Services).
4. To teach students to develop parallel applications using Message Passing Interface (MPI) programming and shared memory programming.
Sinopsis Kursus (Course Synopsis)
1. To enable students to understand and develop parallel and distributed applications.
2. To teach the theory and practice of parallel and distributed systems.
3. To expose students to contemporary distributed middleware technologies (Web Services, Grid Computing, the Globus Toolkit).
4. To enable students to develop parallel applications using Message Passing Interface (MPI) programming.

12. Learning Outcomes
At the completion of the subject, students should be able to:
1. Understand the concepts of parallel computing and distributed computing.
2. Understand the forming criteria and security conditions of distributed computing.
3. Identify problem-solving techniques used in parallel systems.
4. Develop a parallel environment using MPI and shared-memory programming techniques.
5. Develop a distributed environment using popular programming languages.
Programme Outcomes (% of contribution)
Ability to apply soft skills in work and career related activities: 5
Good understanding of fundamental concepts: 30
Acquisition and mastery of knowledge in a specialized area: 20
Acquisition of analytical capabilities and problem-solving skills: 30
Adaptability and passion for learning: 5
Cultivation of an innovative mind and development of entrepreneurial skills: 5
Understanding of responsibility with moral and professional ethics: 5

13. Details of subject
Topics Covered (lecture hours in parentheses)

1. Introduction (2 hours)
Definition of a distributed system, goals, hardware concepts, software concepts, the client-server model. Definition of parallel processing: the demand for computational speed, types of parallel computers, architectural features of message-passing multicomputers, networked computers as a multicomputer platform, multi-core computers as a potential for increased computational speed.

2. Communication (2 hours)
Layered protocols, Remote Procedure Call, Remote Object Invocation, message-oriented communication, stream-oriented communication.

3. Processes (2 hours)
Threads, clients, servers, code migration, software agents.

4. Naming (2 hours)
Naming entities, locating mobile entities, removing unreferenced entities.

5. Consistency and Replication (2 hours)
Data-centric consistency models, client-centric consistency models, distribution protocols, consistency protocols, examples.

6. Fault Tolerance (2 hours)
Process resilience, reliable client-server communication, reliable group communication, distributed commit, recovery.

7. Distributed Object-Based Systems (2 hours)
CORBA, COM+, Java RMI, Web services; comparison of CORBA, COM+, Java RMI and Web services.

8. Parallel Algorithms Basics (2 hours)
Amdahl's law (restated after this table), super parallel, embarrassingly parallel problems, partitioning, divide and conquer.

9. Message Passing Interface (MPI) (4 hours)
History of MPI, goals of MPI, structure of an MPI program (a minimal sketch follows this table), message passing, timing in MPI, collective communication, derived data types.

10. Shared Memory Computation (4 hours)
Basics of shared memory programming, memory coherence, race conditions and deadlock detection, synchronization, threading using pthreads, OpenMP and Java threads (a short OpenMP sketch follows this table).

11. Pipelined Computation (2 hours)
Pipeline technique, computing platforms for pipelined applications, pipeline program examples (adding numbers, sorting numbers, prime number generation, solving a system of linear equations).

12. Synchronous Computation (2 hours)
Synchronization (barrier, counter implementation, tree implementation, butterfly barrier, local synchronization, deadlock), synchronized computation, synchronous iteration program examples.

Total Contact Hours: 28

Laboratories (28 hours):
GRID submission, MPI programming, OpenMP programming, pthread programming and Web Services.
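As a quick reference for topic 8, Amdahl's law is restated below in its usual form; the symbols f (serial fraction of the work) and p (number of processors) are chosen here for illustration and may differ from the notation used in the course texts.

% Amdahl's law: bound on speedup with p processors when a fraction f
% of the work is inherently serial (notation chosen for illustration).
S(p) = \frac{1}{f + \frac{1 - f}{p}}, \qquad \lim_{p \to \infty} S(p) = \frac{1}{f}

For example, with f = 0.1 (10% of the work serial) and p = 16 processors, S(16) = 1 / (0.1 + 0.9/16) = 6.4, so the achievable speedup stays well below the processor count.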
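For topic 9, the following is a minimal sketch in C of the structure of an MPI program (initialise, query rank and size, exchange one point-to-point message, finalise). It is an illustration only, not a prescribed lab exercise from this subject.

/* Minimal MPI program structure: illustrative sketch only.
 * Typically compiled with mpicc and launched with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

    if (rank == 0) {
        int value = 42;
        if (size > 1)                        /* send to rank 1 if present */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();                          /* shut down the runtime     */
    return 0;
}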
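For topic 10, the sketch below illustrates shared-memory parallelism with an OpenMP parallel loop in C, using a reduction clause so that the shared accumulator is updated without a race condition. Again, this is an illustration only, not a prescribed lab exercise.

/* Shared-memory parallelism with OpenMP: sum an array with a
 * parallel for loop and a reduction. Compile with e.g. gcc -fopenmp.
 * Illustrative sketch only. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    enum { N = 1000000 };
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 1.0;                          /* fill with known values */

    /* each thread keeps a private partial sum; the reduction clause
     * combines the partial sums safely at the end of the loop */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.1f (threads available: %d)\n",
           sum, omp_get_max_threads());
    return 0;
}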
14. Text

Text Books
1. Andrew S. Tanenbaum and Maarten van Steen, Distributed Systems: Principles and Paradigms, Prentice Hall, 2002.
2. Barry Wilkinson and Michael Allen, Parallel Programming, Prentice Hall, 1999.
Reference Books
1. Vijay K. Garg, Elements of Distributed Computing, Wiley, 2002.
2. Ian Foster and Carl Kesselman, The Grid: Blueprint for a New Computing Infrastructure, Elsevier Morgan Kaufmann Publishers, 2004.