Document 12915557

International Journal of Engineering Trends and Technology (IJETT) – Volume 28 Number 1 - October 2015
Workstation Clusters for Parallel Computing
Shrinkhala Singhania, Monika Tak
B TECH Student, Computer Science Department, Vellore Institute of Technology, Vellore
Vellore, India
Abstract-- This paper discusses how workstation clustering for parallel computing is a viable alternative to traditional supercomputing. It surveys the developments that have taken place in this field and explains why workstation clustering has become a practical and widely considered option.
Keywords — parallel computing, clusters, UNIX,
Myrinet, Ethernet.
Parallelism is obtained on distributed memory systems by running multiple copies of a parallel program on different nodes, which send messages to each other to coordinate the computation. This paper suggests an alternative to traditional supercomputing; its working, requirements, and related considerations are discussed in detail in the sections below.
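The message-passing style of parallelism described above can be sketched with Python's standard library. This is an illustrative stand-in only: on a real cluster each worker would be a separate machine communicating over the network, not a local process.

```python
# Minimal sketch of distributed-memory parallelism: two worker processes
# hold private data and coordinate by sending messages to a coordinator.
from multiprocessing import Process, Pipe

def worker(conn, local_data):
    partial = sum(local_data)   # compute on this node's private memory
    conn.send(partial)          # send the result as a message
    conn.close()

if __name__ == "__main__":
    a_parent, a_child = Pipe()
    b_parent, b_child = Pipe()
    # Each process receives its own copy of part of the data (no shared memory).
    p1 = Process(target=worker, args=(a_child, [1, 2, 3]))
    p2 = Process(target=worker, args=(b_child, [4, 5, 6]))
    p1.start(); p2.start()
    total = a_parent.recv() + b_parent.recv()   # combine partial results
    p1.join(); p2.join()
    print(total)   # 21
```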
Workstation clusters have increasingly become a popular alternative to traditional parallel computers. Parallel computing has come a long way in the past ten years, and parallel implementations of scientific simulation codes are now in widespread use.
There are two dominant parallel hardware/software architectures in use today:
A. SHARED MEMORY
In systems implementing shared memory, memory is accessible to all processors, and parallel processing occurs through the use of shared data structures, eliminating the need for message passing.
B. DISTRIBUTED MEMORY
In these systems memory is not shared; instead, a number of interconnected computational nodes communicate over a high performance network.
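The two architectures can be contrasted with a toy example (an illustrative sketch only, not from the paper): threads sharing one data structure under a lock versus processes with private data that exchange messages.

```python
# Shared memory vs. distributed memory, sketched with a toy sum.
import threading
from multiprocessing import Process, Queue

# Shared memory: all threads see one data structure; a lock coordinates updates.
def shared_memory_sum(chunks):
    total = [0]
    lock = threading.Lock()
    def add_chunk(chunk):
        s = sum(chunk)
        with lock:                      # coordinate access to shared state
            total[0] += s
    threads = [threading.Thread(target=add_chunk, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

# Distributed memory: each process holds private data and sends its result
# back as a message (a Queue stands in for the network here).
def send_partial(chunk, queue):
    queue.put(sum(chunk))

def distributed_sum(chunks):
    queue = Queue()
    procs = [Process(target=send_partial, args=(c, queue)) for c in chunks]
    for p in procs:
        p.start()
    total = sum(queue.get() for _ in procs)
    for p in procs:
        p.join()
    return total
```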
A workstation cluster is built from several workstations networked together using a high performance interconnect of some kind. Another important factor is how well the cluster integrates with the existing computing environment. The balance of processor speed and interconnect speed is an important consideration when building a workstation cluster.
A major advantage these clusters have over traditional supercomputers is that they are built from commodity components, giving them a price/performance advantage. As workstations become cheaper, clustering becomes all the more attractive. The biggest drawback of traditional parallel supercomputing is that it always requires "special" new skills and additional training; traditional supercomputers also have a smaller selection of software packages.
Planning a cluster involves two important choices: the workstation and the interconnect.
The goal of portable parallel computing is served by two basic and widely used parallel message passing systems: the Message Passing Interface (MPI) and the Parallel Virtual Machine (PVM). Both can be implemented on workstation clusters as well as on traditional supercomputers, and both run on most UNIX platforms.
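A minimal MPI-style program follows the SPMD pattern: every rank runs the same code on its share of the data, and a root gathers the results. The sketch below mimics that pattern with Python's standard library; a real MPI job would use MPI_Send/MPI_Recv (or a binding such as mpi4py) and be launched with mpirun across the cluster's nodes.

```python
# SPMD sketch: every "rank" runs the same function on its share of the
# problem, then sends a partial result to the root, as MPI programs do.
from multiprocessing import Process, Queue

def rank_main(rank, size, queue):
    # This rank sums every size-th integer in 0..99, starting at `rank`.
    partial = sum(range(rank, 100, size))
    queue.put(partial)               # stands in for a send to rank 0

if __name__ == "__main__":
    size = 4
    queue = Queue()
    ranks = [Process(target=rank_main, args=(r, size, queue)) for r in range(size)]
    for p in ranks:
        p.start()
    total = sum(queue.get() for _ in range(size))   # root-side gather/reduce
    for p in ranks:
        p.join()
    print(total)   # 4950, i.e. sum(range(100))
```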
Applications that do a lot of message passing, have little tolerance for message passing latency, or require a higher performance interconnect can run on workstation clusters. It is easier to design a workstation cluster tuned for a specific type of application than one for broader general purpose tasks.
A parallel workstation cluster must meet performance requirements beyond those of most general purpose computing environments. To understand why, one must consider how parallel computing software is typically used:
- clusters should perform as a high performance parallel computing resource
- nodes in a cluster are always used in groups
- if servers are dedicated for cluster use, it is easier to coordinate software upgrades
The factors to consider when choosing hardware for cluster compute nodes include processor speed, cache size, memory bandwidth, memory capacity, network bandwidth, and network latency. The individual effect of each depends on the requirements of the applications.
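The interplay of network bandwidth and latency can be made concrete with the usual first-order cost model (an illustrative assumption, not a figure from the paper): sending an n-byte message takes roughly latency + n / bandwidth.

```python
def message_time(n_bytes, latency_s, bandwidth_Bps):
    """First-order model: transfer time = latency + size / bandwidth."""
    return latency_s + n_bytes / bandwidth_Bps

# Illustrative numbers: ~50 us latency on a 1 Gb/s (125e6 bytes/s) link.
# Small messages are dominated by latency, large ones by bandwidth.
small = message_time(64, 50e-6, 125e6)          # ~50.5 microseconds
large = message_time(10_000_000, 50e-6, 125e6)  # ~80 milliseconds
```

This is why latency-sensitive applications benefit from interconnects like Myrinet even when raw bandwidth is comparable: for small messages the latency term dominates the total.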
Once the application and hardware requirements are taken care of, integration into the computing environment and maintainability are the next concerns.
Physical space limitations, power, and cooling are all considerations when building large clusters. Power and cooling capacity can become a problem when building large clusters, and they are site-specific issues for which there are not many shortcuts.
It is worth considering what the lifespan of a
cluster’s compute nodes will be, and what to do with
them when they are no longer fast enough for the
intended applications. One strategy that has been
successfully employed by several institutions is to
recycle compute nodes as desktop workstations after
approximately two years of service. Two years is
enough time to allow for a doubling in processor
speed in newly purchased equipment and is short
enough that recycled compute nodes will still be
viable desktop computers.
The most important thing to take care of when working with clusters is the cluster's network interconnect.
Some clusters use more sophisticated networking
components such as gigabit Ethernet or Myrinet,
which provide increased bandwidth, decreased latency,
or both.
Some institutions have made creative use of
multiple network interfaces per node and multiple
switched networks as an inexpensive alternative to
gigabit Ethernet and Myrinet for achieving improved
bandwidth and latency while retaining the
price/performance advantages of commodity hardware.
The cluster file server or "master node" is usually installed with the full complement of development tools, libraries, and other software.
The server often contains a significant amount of
local disk storage. This space is made available for
cluster users as a temporary storage area for large data
files and programs. The storage area on the cluster
server is visible to all compute nodes, and is the only
shared storage area available to all the cluster’s nodes.
In normal use, cluster users copy data files and program binaries to the storage area for execution of jobs on the cluster. While jobs are running, data
may be read from and written to this area. The
contents of the storage are not automatically erased on
a regular basis; however, there are no guarantees made
as to the long-term availability of data left on the
cluster server.
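The staging workflow described above can be sketched as follows. The directory layout and names here are hypothetical, not taken from the paper; since the shared scratch area offers no long-term guarantees, the sketch also cleans up explicitly after a job.

```python
# Sketch of staging job files to the cluster's shared storage area and
# cleaning up afterwards; shared scratch is not erased automatically.
import shutil
from pathlib import Path

def stage_job(input_files, shared_root, job_name):
    """Copy input files into a per-job directory under shared storage."""
    job_dir = Path(shared_root) / "scratch" / job_name   # hypothetical layout
    job_dir.mkdir(parents=True, exist_ok=True)
    for f in input_files:
        shutil.copy2(f, job_dir)                         # preserve metadata
    return job_dir

def clean_job(job_dir):
    """Remove a job's scratch directory once its data is no longer needed."""
    shutil.rmtree(job_dir)
```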
It is clearly evident that workstation clusters are a much better option than traditional supercomputing. They are more viable and less cumbersome to handle. The maintenance cost, the requirements, and the working procedure of this technique are far less demanding than those of traditional supercomputers. This paper highlights the importance of parallel computing through workstation clusters.
I would like to thank Mr. Senthil J, Assistant
Director Academics (Systems), VIT University for
guiding us through the research on this paper.
ISSN: 2231-5381