I hope that several of you will consider taking my course this Fall in
PARALLEL AND DISTRIBUTED COMPUTING CS 6/73995
Some additional information about this course is provided below.
Programming is currently undergoing a transition from an environment in which most computers and computation are sequential to one in which most computers are built on multi-core chip technology. Making optimal use of these multi-core chips requires parallel computation. Programs that cannot exploit the parallelism these chips provide are not expected to see any performance improvement, now or in the future, since clock rates for sequential computers are not expected to increase significantly. As a result, the use of parallel computation by professional programmers is almost certain to change from an option into a necessity.
SOME FEATURES OF THIS COURSE:
Includes coverage of the fundamental concepts of parallel computation rather than focusing primarily on the latest trends, which are quickly outdated by the rapid pace of technological change in this area.
Covers the principal types of parallel computation by investigating three key aspects of each: typical architectures, typical programming languages, and algorithm design techniques.
Also covers the currently popular cluster architecture and the MPI language normally used with clusters. (This is the primary focus of the textbook selected for this course.)
I expect to add material on parallel programming covering the basics of one or more other programming languages, such as the CUDA language for NVIDIA GPU architectures.
SOME MAJOR TOPICS IN COURSE:
General concepts for parallel computation
Study of asynchronous (MIMD) computation by investigating typical architectures, examples of specific computers, and algorithm design techniques for MIMDs.
Understanding the MPI language and creating programs for MIMDs (particularly clusters) using MPI. (A short illustrative MPI example is given after this list.)
Study of synchronous (e.g., SIMD and multi-SIMD) computation through consideration of typical architectures, examples of specific computers, coverage of the HPF and ASC programming languages, and algorithm design techniques.
Study of computation using interconnection networks such as the 2D mesh, hypercube, etc.
A comparison of advantages and disadvantages of asynchronous and synchronous computation.
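For those who have not seen MPI before, its flavor is easy to convey. Below is a minimal "hello world" sketch in C (the language used in the textbook), in which each process started by the MPI runtime prints its rank. The mpicc and mpirun commands shown are the usual ones for most MPI installations, though the details may differ on any particular cluster.

    /* hello.c -- each MPI process reports its rank.
       Typical build and run (may vary by installation):
         mpicc hello.c -o hello
         mpirun -np 4 ./hello                            */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id         */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }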
ADDITIONAL BENEFITS:
While the principal focus is on parallel computation, most of the material covered is also applicable to distributed computing.
There is a wide choice of thesis and dissertation topics for those interested in working in parallel and distributed computing, and several professors in the department are working in this area.
Students working on a thesis or dissertation in another area that would benefit from the use of parallel and distributed computing will also find the material covered in this course useful.
Most computationally intensive or large-scale computational problems require a parallel or distributed system to satisfy their speed and memory requirements.
PREREQUISITES: This course is designed to be accessible to all graduate students in computer science.
TEXTBOOK: Parallel Programming in C with MPI and OpenMP by Michael J. Quinn, McGraw-Hill, 2004. The classroom slides will include considerable information from a wide range of other sources, and additional reference material will be provided when needed.
TIME: The scheduled time for this course is 2:15-3:05 on Monday, Wednesday, and Friday in Room 276 of the MCS building.
REQUEST: If you know another graduate student who may be interested in this course, please bring this information to their attention, as they may not have received this email.
Thanks,
Johnnie Baker