VISUALIZATION: CHOOSING THE RIGHT TOOL FOR THE RIGHT JOB
Helen Wright, Fotis Chatzinikos and James Osborne
Simulation and Visualization Research Group, Department of Computer Science, University of
Hull, HULL HU6 7RX, UK
Abstract: Visualization is nowadays an intrinsic part of scientific computation, with graphical
representation of simulation data giving the insight needed to assess results and postulate new models.
Interaction with the graphics output is an essential part of the visualization process, but whereas desktop
workstations with mouse and keyboard provide a commodity mechanism, more expensive items such as
wall-sized screens tend to be provided within central facilities and have specialised inputs using wand or
glove. An important skill for workers in any modern visualization laboratory is thus to know which
device is appropriate to the task in hand and what software drives it. Two projects undertaken at the
University of Hull can contribute: one investigates the potential for Grid toolkits to help manage
equipment diversity, by conferring “device-awareness” on standard visualization software components;
another harmonises visualization input and output mechanisms into a single, image interaction modality.
Together these will enable graphics interaction applications to migrate transparently across a variety of
equipment: users’ requirements will trigger autonomous selection of hardware resources and the
appropriate software to support their needs. In short, we will finally have a mechanism for that most
difficult task of all: choosing the right tool for the right job.
Keywords: Usability, Visualization, Grid, Image interaction
1 Introduction
The adage “use the right tool for the right job”
is just as fitting in the visualization laboratory as
it is in the workshop. Of course, the difficulty is
not using the right tool, but choosing it in the
first place. In this paper we describe some of
the problems that occur when trying to support
users of diverse visualization packages
deployed on a variety of graphics hardware, the
potential for grid tools to simplify this process,
and the introduction of new interaction
modalities that recognise different device
capabilities.
1.1 Hull Immersive Visualization
Environment
Our consideration of this fundamental usability
problem stems from the planning and
installation of the Hull Immersive Visualization
Environment (HIVE) [1] at the University of
Hull in the Department of Computer Science.
The HIVE has two wall-sized displays (one
front, the other back projected) with stereo
viewing, a hemisphere dome display, and
various workstations capable of rendering stereo
graphics and/or supporting a haptic feedback
device. In addition to these specific items, other
machines without special display capabilities
are used for prototyping visualizations prior to
using the specialised resources described. A 32-node HPC cluster further increases the compute
resource available to users. Software provision
is equally diverse, with applications available that include Modular Visualization Environments (MVEs, parts of which have been
extended by HIVE staff), virtual reality scene
definition toolkits and various ‘home-grown’
viewing packages. All of these are supported
by lower level software APIs that perform head
tracking, stereo rendering and haptic
manipulation.
1.2 Grid tools
At the same time as institutions are recognising
the value of pooling their visualization
resources within centres such as the HIVE, the
grid computing community is developing tools
that could help to utilise these resources. Grid
computing [2] is traditionally applied to grand-challenge problems; the problems we aim to
solve here are not grand challenges, but they are
complex. The toolkits [3] used to underpin grid
computing have a place in the work we describe
here, since they provide a means to support
resource discovery, job scheduling, certification
and security.
1.3 Types of client
The aim of the HIVE is twofold: to provide a research facility for visualization, virtual reality and imaging sciences, and to support other research within the University and its region that uses these technologies. Two
types of user can thus be identified: the first is
the visualization engineer (or visioneer) who
will be familiar with most (though probably not
all) aspects of the HIVE facilities and can
produce visualization solutions to the detailed
specification provided by the second type, the
application domain expert.
2 The SuperVise Concept
The SuperVise system [4] has been conceived to allow both visioneers and domain experts to make the most of their time and resources, with an
initial focus on providing an interface to
visualization techniques. These may be in the
form of an IRIS Explorer map, an Open DX
network, or a script to drive a ‘home-grown’
visualizer in a particular way. The precise
solution developed depends on the visualization
engineer’s skills and the requirements of the
application, but its detailed nature is hidden
from the end-user. The role of the system
components is then to deliver this solution in a
transparent and flexible way on the available
(and evolving) hardware and software.
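To make the catalogue idea concrete, the sketch below shows how one named technique might map onto its alternative encapsulations. This is our own illustrative Python, not SuperVise code; all file names and field names are hypothetical.

    # Hypothetical sketch of a techniques catalogue entry: one named
    # technique maps onto several equivalent encapsulations, so the
    # system can launch whichever suits the software that is installed.
    TECHNIQUES_CATALOGUE = {
        "isosurface": {
            "iris_explorer": "maps/isosurface.map",      # IRIS Explorer map
            "open_dx":       "networks/isosurface.net",  # Open DX network
            "script":        "scripts/isosurface.sh",    # drives a 'home-grown' viewer
        },
        "slice": {
            "iris_explorer": "maps/slice.map",
            "open_dx":       "networks/slice.net",
        },
    }

    def implementations_for(technique):
        """Return the known encapsulations of a named technique."""
        return TECHNIQUES_CATALOGUE.get(technique, {})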
2.1 Illustrative scenario
Figure 1 shows a scenario where a user wishes
to visualize their data using an isosurface and a
slice, initially run from their own desktop
machine, Office 1. This has no special output
capabilities so the check boxes for Stereo,
Haptic and Head Tracked are greyed. The
visualization techniques, encapsulated respectively as an IRIS Explorer map and an
Open DX network, are dispatched to available
network nodes and their geometry outputs are
combined and returned to Office 1.
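This dispatch-and-combine step can be pictured as follows. The sketch is a simplification under our own naming; neither the classes nor the functions come from SuperVise itself.

    # Illustrative sketch only: each encapsulated technique runs on an
    # available network node, and the geometry outputs are merged and
    # returned to the machine that made the request.
    class Node:
        def __init__(self, name):
            self.name = name

        def execute(self, technique):
            # Stand-in for remotely running the technique and
            # returning its geometry output.
            return f"geometry({technique}@{self.name})"

    def combine_geometry(parts):
        # Stand-in for merging per-technique geometries into one scene.
        return " + ".join(parts)

    def run_request(techniques, nodes, requester):
        parts = [n.execute(t) for t, n in zip(techniques, nodes)]
        print(f"deliver {combine_geometry(parts)} to {requester}")

    run_request(["isosurface", "slice"],
                [Node("node-a"), Node("node-b")], "Office 1")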
Next, the user wishes to display the output in
stereo. Stereo capability is available in the
HIVE using node Centre 1, so when seated at
this machine the interface does offer the option
to use stereo, which the user checks. When the
geometry outputs are combined, left- and right-eye views are generated and delivered to Centre 1.
Later the user wishes to discuss the
visualization with a group of people. The
HIVE’s stereo display wall provides head-tracked stereo, so now the interface appears to
the user with the options to use stereo and head
tracking available and checked. Note, however,
that this user is not certified to use haptics, so
even though the display wall incorporates a
Phantom, in this instance the haptic check box
is greyed and the geometries returned will not
interface to this device.
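Greying of the check boxes follows directly from the node’s capability record and the user’s certification. A minimal sketch, assuming hypothetical capability names, might read:

    # Hypothetical sketch: an option is offered only if the node
    # supports it, and haptics additionally requires certification.
    NODE_CAPABILITIES = {
        "Office 1":     set(),
        "Centre 1":     {"stereo"},
        "Display Wall": {"stereo", "head_tracking", "haptic"},
    }

    def offered_options(node, certified_for):
        available = NODE_CAPABILITIES.get(node, set())
        return {opt for opt in available
                if opt != "haptic" or "haptic" in certified_for}

    # The user in the scenario is not certified for haptics, so the
    # Phantom on the display wall stays greyed out.
    print(offered_options("Display Wall", certified_for=set()))
    # {'stereo', 'head_tracking'} (set order may vary)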
2.2 Incorporating interaction
The scenario in 2.1 represents the current state of the art of our SuperVise prototype,
whereby presentational techniques, i.e. those
requiring little or no interaction by the user, can
be supported. Suppose however that the user
wants to adjust the position or orientation of the
slice; how they do this is inherently bound up
with the device they are sitting at. At a
workstation an IRIS Explorer user can employ a
mouse-driven transform generator (Figure 2)
that transmits a new plane normal and causes
the slice to be redrawn. On the display wall,
however, the viewer is configured to fill the
screen and interaction via this separate window
is no longer an option.
To solve this difficulty we will draw on work in
[5], which has devised a software architecture to
allow image-based interaction with
computational steering and visualization
applications. Instead of treating the display
process as a purely output-oriented pipeline, the
architecture incorporates an additional, input-oriented pipeline. The TransformGen process
can now transmit Slice’s requirement for an
input vector using an InsertInteractor process,
which places geometry in the scene (Figure 3).
Dragging on the base or head of the vector
respectively translates or re-orientates the slice.
Other geometries in the image interaction
toolkit include a scalar interactor and a
positional interactor. The former could be used
to scale up the overall size of the slice and the
latter, if applied to its corners, could change its
aspect ratio.
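The behaviour of the vector interactor can be sketched in a few lines; the class and callback names below are our assumptions rather than the API of [5].

    # Hypothetical sketch: dragging the base of the in-scene vector
    # translates the slice, dragging its head re-orientates it. Each
    # change is pushed back through a callback into the visualization.
    class VectorInteractor:
        def __init__(self, base, normal, on_change):
            self.base = base          # position of the slice plane
            self.normal = normal      # orientation of the slice plane
            self.on_change = on_change

        def drag_base(self, new_base):
            self.base = new_base                    # translate
            self.on_change(self.base, self.normal)

        def drag_head(self, new_normal):
            self.normal = new_normal                # re-orientate
            self.on_change(self.base, self.normal)

    def redraw_slice(origin, normal):
        print(f"slice redrawn at {origin}, normal {normal}")

    interactor = VectorInteractor((0, 0, 0), (0, 0, 1), redraw_slice)
    interactor.drag_head((0, 1, 0))   # tilt the slice plane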
3 Putting it all Together
The existing network node characteristics
repository of SuperVise is currently configured
to hold information about output capabilities,
such as installed viewer software, whether the
display supports haptic output or stereo
viewing, and whether the camera movement is
controlled by the user’s head position. To
incorporate the type of interaction shown here
will require the input modality of the device
also to be known. The SuperVise system
already selects visualization system scripts from
a techniques catalogue, in order to execute the
user’s chosen visualization. Selecting a specific version of that script will similarly ensure that the interaction needs of the user are handled. Referring once again to the scenario,
using Office 1 and Centre 1 (each equipped with
mouse) would therefore cause a script to be
launched reflecting Figure 2, whereas using the
display wall would launch a script to execute
the same visualization but reflecting the different input requirements of Figure 3.
Figure 1 SuperVise scenario - visualizing a dataset using a variety of output modes
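Script selection can then be no more than a lookup keyed on technique and input modality, as in this sketch (the variant names are invented for illustration):

    # Hypothetical sketch: the same visualization is stored as several
    # script variants, and the node's input modality picks the one run.
    SCRIPT_VARIANTS = {
        ("slice", "mouse"):       "slice_transform_window",  # Figure 2 style
        ("slice", "image_based"): "slice_inserted_vector",   # Figure 3 style
    }

    def select_script(technique, input_modality):
        return SCRIPT_VARIANTS[(technique, input_modality)]

    print(select_script("slice", "mouse"))        # Office 1 or Centre 1
    print(select_script("slice", "image_based"))  # display wall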
Combining inputs and outputs into a single
image also brings synchronisation benefits.
Problems of synchronisation during interaction
can be especially difficult to solve using grid
tools. Using image-based interaction places
knowledge about the state of the visualization
calculation within the renderer, since if the state
of the interactor is new, it follows that the state
of the visualization must be out-of-date. The
renderer can therefore disable the interactor
until the visualization is refreshed, thereby
preventing race conditions.
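A minimal sketch of this guard, again under invented names, shows the renderer acting as the lock:

    # Hypothetical sketch: the renderer disables the interactor the
    # moment its state changes, and re-enables it only once the
    # visualization has been refreshed, preventing race conditions.
    class Renderer:
        def __init__(self):
            self.interactor_enabled = True

        def on_interactor_changed(self, new_state):
            if not self.interactor_enabled:
                return                       # ignore input while stale
            self.interactor_enabled = False  # visualization now out-of-date
            self.request_refresh(new_state)

        def on_visualization_refreshed(self):
            self.interactor_enabled = True   # safe to interact again

        def request_refresh(self, state):
            print(f"recomputing visualization for {state}")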
Figure 2 Conventional interaction with a slice tool using a separate window
Figure 3 Image-based interaction using a plane normal inserted in the scene
4 Summary
Combining SuperVise with image-based
interaction will produce a powerful tool for the
visualization laboratory. Users will be freed
from having to know about specific devices and
how to drive them, leaving them able to
concentrate on the job in hand – that is,
delivering the e-Science.
References
[1] HIVE (Hull Immersive Visualization Environment), http://www.hive.hull.ac.uk/ (2003)
[2] Foster, I. & Kesselman, C. (1998), The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann Publishers
[3] Globus (2002), Introduction to Grid Computing and the Globus Toolkit, http://www.globus.org/training/grids-and-globus-toolkit/IntroToGridsAndGlobusToolkit.ppt
[4] Osborne, J.A. & Wright, H. (2003), SuperVise: Using Grid Tools to Simplify Visualization, 5th International Conference on Parallel Processing and Applied Mathematics (PPAM 2003), Czestochowa, Poland, September 7-10
[5] Chatzinikos, F. & Wright, H. (2003), Enabling Multi-purpose Image Interaction in Modular Visualization Environments, in R.F. Erbacher, P.C. Chen, J.C. Roberts, M. Gröhn and K. Börner (eds), SPIE Vol. 5009, Visualization and Data Analysis, pp. 455-462