Collaborative Visualisation over the Access Grid using the ICENI Grid Middleware
Gary Kong, Jim Stanton
Steven Newhouse, John Darlington
London e-Science Centre, Department of Computing
Imperial College London, London SW7 2AZ
lesc-staff@doc.ic.ac.uk
Abstract
In this paper we demonstrate a method of using the ICENI Grid middleware as a
mechanism for deploying fully extensible collaborative visualisation environments.
Various collaboration scenarios can be composed according to session requirements
by means of a component-based Grid system, including display capabilities,
steering capabilities and the ability to stream graphics output over the Access
Grid.
Keywords
Grid, middleware, ICENI, collaborative, visualisation, steering, Access Grid
1. Introduction
The federations of high-performance computational resources represented by computational Grids are providing application scientists with new opportunities for accessing and using such resources. New ways of working are emerging, in particular in relation to real-time visualisation and steering of running applications, whether by an individual investigator co-located with the simulation or by groups of collaborating investigators distributed across multiple sites.

However, due to the geographically distributed nature of collaborative visualisation activities, sessions tend to lack a sense of human presence and face-to-face interaction. In this paper, we demonstrate a method of using the ICENI Grid middleware [1][2] as a mechanism for deploying fully extensible collaborative visualisation environments, which integrate the visualisation tools within ICENI with the Access Grid video conferencing technology [3] using the Chromium distributed graphics rendering framework [4][5], in order to provide a richer and more comprehensive integrated collaborative environment.
2. The ICENI Grid middleware
ICENI, the Imperial College e-Science Networked Infrastructure, is a Grid middleware system providing mechanisms for creating and managing computational Grids and for designing and deploying applications onto these Grid resources. It adopts a service-oriented architecture with well-defined service interfaces that separate the system functionality from the specific implementing technologies. Reference implementations have currently been produced on top of Jini, JXTA and OGSA [6].
A component-based application model is used, whereby domain-specific knowledge is encapsulated within clearly defined software components. Complete applications are then defined by composing one or more of these components, enabling application scientists to combine their expertise with that of specialists in other fields, such as numerical analysis or scientific visualisation. Intelligent schedulers deploy these compositions onto appropriate resources according to performance criteria and the service level agreements associated with the available hardware and software resources [7]. See Figure 1.
Figure 1 illustrates a composition of ICENI Grid components using the drag-and-drop NetBeans Application Framework. Components that are relevant to the current session are dragged onto the desktop and the appropriate component connections established before being deployed to the appropriate resources.
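To make the composition model concrete, the following sketch mimics the workflow of Figure 1 in code. ICENI's actual component API is Java-based and is not reproduced here; the Component, connect and deploy names below are hypothetical illustrations of the pattern, not ICENI's real interface.

```python
# Hypothetical sketch of ICENI-style component composition; the class
# and function names are illustrative, not ICENI's real (Java) API.

class Component:
    """A software component exposing named input and output ports."""
    def __init__(self, name):
        self.name = name
        self.connections = []  # (out_port, target_component, in_port)

    def connect(self, out_port, target, in_port):
        """Wire one of this component's outputs to another's input."""
        self.connections.append((out_port, target, in_port))

def deploy(components, sla):
    """Stand-in for ICENI's intelligent scheduler, which in reality
    chooses resources from performance criteria and the SLAs attached
    to the available hardware and software [7]."""
    for c in components:
        print(f"deploying {c.name} under SLA '{sla}'")

# Compose the session of Figure 1: a simulation feeding a visualiser,
# with a steerer wired to the simulation's control port.
sim = Component("simulation")
steerer = Component("steerer")
visualiser = Component("visualiser")
sim.connect("state", visualiser, "data")
steerer.connect("parameters", sim, "control")
deploy([sim, steerer, visualiser], sla="default")
```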
ICENI also supports dynamic extensions,
allowing new components to be instantiated
and connected into an existing deployed
application. This provides the investigator
with highly configurable mechanisms to
interact with the application, whether to
modify the existing computation or to add
new processing streams.
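In the terms of the hypothetical sketch above, dynamic extension simply means composing and deploying against a component that is already running:

```python
# Hypothetical continuation of the earlier sketch: instantiate a new
# component and wire it into the live application without redeploying
# the existing composition.
late_vis = Component("late-joining-visualiser")
sim.connect("state", late_vis, "data")   # attach a new processing stream
deploy([late_vis], sla="default")
```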
3. Extensible deployment and collaborative interaction
The ICENI Grid middleware therefore
provides a versatile framework within which
to compose and deploy applications and
collaborative technologies. During the
lifecycle of a simulation, the extensible
deployment capabilities can be used to set up
different configurations of components to
satisfy different collaborative needs.
On launching the simulation, the investigator may wish to verify that the initial conditions have been correctly set and that the computations are progressing as intended. Steering and visualisation components can therefore be deployed and connected to the simulation and, if necessary, adjustments made to the relevant parameters. Once satisfied that the computation is progressing correctly, the investigator would disconnect these components to eliminate unnecessary overhead on the simulation.
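Continuing the hypothetical sketch, this launch-time check is an attach, adjust and detach cycle:

```python
# Hypothetical launch-time verification: attach steering, adjust a
# parameter, then disconnect to remove the overhead on the simulation.
check = Component("launch-check-steerer")
check.connect("parameters", sim, "control")
deploy([check], sla="owner-only")
# ... inspect early output, correct an initial condition if needed ...
check.connections.clear()               # detach once satisfied
```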
At a later stage the investigator may wish to set up a new interactive session with the simulation and allow collaborating workers to share in this interaction. Service level agreements associated with the software components define and control connection capability; typically these would initially be set with a restrictive connection policy. At the appropriate time the investigator would modify these to allow collaborators to connect. Multiple steering and visualisation components can then be configured to provide collaborative interaction sessions between groups of trusted investigators.
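The shape of such a connection policy might be as below; the schema is entirely hypothetical, since the paper does not specify how ICENI encodes its service level agreements.

```python
# Hypothetical encoding of an SLA connection policy; this schema is
# illustrative only, not ICENI's actual SLA representation.
sla_policy = {
    "component": "simulation",
    "connections": {"allow": ["investigator@lesc"]},  # restrictive default
}

def admit_collaborators(policy, collaborators):
    """Relax the policy at the appropriate time so that trusted
    collaborators may connect steering/visualisation components."""
    policy["connections"]["allow"].extend(collaborators)

admit_collaborators(sla_policy, ["alice@site-a", "bob@site-b"])
```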
The basic ICENI graphics components are based on the Visualisation Toolkit [8] and provide independent renderings of the data being exported. The collaborating users may therefore view different data sets or different renderings of the same data set, but these will not be synchronised with each other. Additional components have therefore been developed to satisfy use cases where such synchronisation is desirable and to provide a richer collaborative environment for the investigators.
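Each such graphics component owns its own rendering pipeline, which is why the views are independent. The sketch below shows the standard source-mapper-actor-renderer chain in VTK's Python bindings; the sphere source is an arbitrary stand-in for whatever data the simulation exports.

```python
# A minimal VTK pipeline of the kind an ICENI graphics component wraps:
# source -> mapper -> actor -> renderer -> render window. Each
# collaborator's component owns its own pipeline, hence the
# unsynchronised independent views described above.
import vtk

source = vtk.vtkSphereSource()          # stand-in for exported data
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()                      # each user interacts locally
```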
Figure 2 illustrates a collaborative visualisation/steering session deployed from the component composition in Figure 1, along with the Access Grid environment.
4. Chromium – distributed graphics framework
Chromium is an open-source graphics framework that enables OpenGL-based graphics to be rendered efficiently over a distributed network or cluster. The framework allows rendering to be configured from either a single OpenGL-based application, or in parallel for workload sharing, to a number of configurable output setups, e.g. broadcasting to multiple rendering nodes, or splitting the output stream into a number of sections for output on high-resolution tiled-display systems. The basic working of the framework involves intercepting the OpenGL calls made to the system by the application; these are then passed on to 'Stream Processing Units' (SPUs). Each SPU performs a specific function by overriding default GL calls, and SPUs can be chained together to create an overall effect. SPUs can also be extended to perform custom functions, such as the 'StereoSPU' that is used for generating passive stereo displays. The various Chromium runtime modules are wrapped as ICENI Grid components (iceni-chromium-display components) that users can use as a basis for composing and configuring the display requirements of a visualisation session. See Figure 2.
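Concretely, Chromium is driven by a Python 'mothership' configuration script that declares the nodes in the cluster and the SPU chain each one runs; the iceni-chromium-display components generate and manage such configurations. The sketch below is modelled on the demo configurations shipped with Chromium, so treat the exact class and option names as approximate rather than authoritative.

```python
# Sketch of a Chromium mothership configuration, modelled on the demo
# configs shipped with Chromium: one application node packs its OpenGL
# stream and ships it to a network node running the render SPU.
import sys
sys.path.append("../server")        # location of Chromium's config module
from mothership import *

cr = CR()

# Server side: receive the GL stream and render it in a local window.
render_spu = SPU('render')
render_spu.Conf('window_geometry', [0, 0, 1024, 768])
server = CRNetworkNode('render-host')
server.AddSPU(render_spu)
cr.AddNode(server)

# Client side: intercept the app's GL calls and pack them to the server.
pack_spu = SPU('pack')
app = CRApplicationNode('app-host')
app.AddSPU(pack_spu)
pack_spu.AddServer(server, protocol='tcpip')
app.SetApplication('atlantis')      # any OpenGL application works here
cr.AddNode(app)

cr.Go()
```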
Furthermore, Chromium can be configured with ANL-developed stream processing units (FLX [9] and FLXmitter [10]) to output its graphics as an H.261 video stream sent to a multicast address, using a networking framework called the Adaptive Communication Environment (ACE) [11]; this effectively provides a bridge between the distributed graphics rendering infrastructure that Chromium provides and the Access Grid. See Figure 3.
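In configuration terms this amounts to replacing the render SPU of the previous sketch with the video-encoding SPU. The SPU name 'flxmitter' and the option keys below are assumptions based on [9] and [10], not verified options; consult those sources for the real configuration.

```python
# Hypothetical variation on the previous mothership sketch: the GL
# stream is encoded as H.261 video and multicast onto the Access Grid.
# The SPU name and the 'multicast_address'/'port' keys are assumptions.
import sys
sys.path.append("../server")
from mothership import *

cr = CR()

flx_spu = SPU('flxmitter')                         # SPU name assumed from [9][10]
flx_spu.Conf('multicast_address', '224.2.2.24')    # illustrative, not a real venue
flx_spu.Conf('port', 57000)
encoder = CRNetworkNode('encoder-host')
encoder.AddSPU(flx_spu)
cr.AddNode(encoder)

app = CRApplicationNode('app-host')
pack_spu = SPU('pack')
app.AddSPU(pack_spu)
pack_spu.AddServer(encoder, protocol='tcpip')
app.SetApplication('atlantis')
cr.AddNode(app)

cr.Go()
```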
5. The Access Grid
The Access Grid is an open-source video conferencing tool that uses IP multicasting technology to transmit and receive audio and video streams efficiently over IP networks. The Access Grid toolkit contains three main parts: a video tool, an audio tool and a text-based chat tool. Meetings are conducted in virtual meeting rooms known as 'virtual venues'. The Access Grid has been widely adopted by the academic community and also in the business world, reflecting a realisation that it is a tool offering real savings in time and money for meetings that would otherwise be near impossible and extremely costly to conduct face-to-face.

From a collaborative visualisation point of view, most activity of this type involves sites that are geographically distributed and hence lacks a sense of human presence and interaction; the ability to integrate the Access Grid with such visualisation systems not only provides a basis for instant feedback but also improves the overall HCI quality of the session. Through the run-time options of the iceni-chromium-display components, a visualisation graphics output can be configured to stream directly onto the Access Grid either as a single H.261 video stream or as a series of tiled sections, which can then be reassembled, automatically or manually, at the receiver's end. This allows users who do not have the necessary hardware or software capabilities to participate in a visualisation session and to view the graphics output. The various Access Grid modules can also be wrapped as ICENI components, allowing them to be composed within a collaborative visualisation session and automatically deployed to host machines that are connected to the Grid middleware. See Figure 3.

Figure 3 illustrates two visualisation graphics outputs streamed as H.261 video directly onto the Access Grid, allowing users who do not have the hardware or software capabilities to participate in a visualisation session and to view the graphics output.
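On the receiving side, participants view such a stream with the Access Grid's standard video tool, vic, which takes a multicast address/port argument. A minimal sketch, reusing the illustrative address from the earlier configuration:

```python
# Sketch: join the multicast video session from the earlier example
# using vic, the Access Grid's video tool. The address and port are
# the illustrative values used above, not a real virtual venue.
import subprocess

VENUE_VIDEO = "224.2.2.24/57000"     # multicast address/port, illustrative

# vic accepts an address/port argument and displays any H.261 streams
# it receives there, including Chromium's encoded graphics output.
subprocess.run(["vic", VENUE_VIDEO], check=True)
```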
6. Conclusion
The component-based application model used by ICENI, and its ability to instantiate and connect new components to a running application, provide a versatile and flexible mechanism for integrating applications with advanced collaborative technologies. This has been demonstrated using components based on Chromium and the Access Grid to provide a Grid-enabled interactive collaborative environment.
References
[1] N. Furmento, W. Lee, A. Mayer, S. Newhouse, J. Darlington. ICENI: An Open Grid Service Architecture Implemented in Jini. Supercomputing 2002, Baltimore, USA, November 2002.
[2] http://www.lesc.ic.ac.uk/iceni
[3] http://www.accessgrid.org
[4] G. Humphreys, M. Houston, Y. Ng, R. Frank, S. Ahern, P. Kirchner, J.T. Klosowski. Chromium: A stream processing framework for interactive graphics on clusters. SIGGRAPH 2002.
[5] http://www.sourceforge.net/projects/chromium
[6] N. Furmento, J. Hau, W. Lee, S. Newhouse, J. Darlington. Implementation of a Service-Oriented Architecture on top of Jini, JXTA and OGSA. UK e-Science AHM2003, Nottingham, September 2003.
[7] L. Young, S. McGough, S. Newhouse, J. Darlington. Scheduling within ICENI. UK e-Science AHM2003, Nottingham, September 2003.
[8] http://public.kitware.com/VTK
[9] http://www-unix.mcs.anl.gov/~jones/Chromium/flxspu.htm
[10] Futures Laboratory, Argonne National Laboratory.
[11] http://www.cs.wustl.edu/~schmidt/ACE.html
[12] N.T. Karonis, M.E. Papka, J. Binns, J. Bresnahan, J.A. Insley, D. Jones, J.M. Link. High-resolution remote rendering of large datasets in a collaborative environment.