Visualizing Large Scale Volumetric Data
J. Krüger¹, T. Fogal¹, B. Whitlock²
¹SCI Institute, University of Utah; ²Lawrence Livermore National Laboratory
Summary
Scientists from the SciDAC Visualization and Analytics Center for Enabling Technologies (VACET) have developed cutting-edge techniques for visualizing large volume data. By emphasizing real-time performance, these techniques encourage scientists to explore their data and develop new theories. To achieve this
level of performance, the Tuvok volume rendering library leverages the immense computational resources
available on recent graphics processing units, and will be paired with more traditional parallel computing
resources. Work is underway to integrate these tools into production-quality, general-purpose
visualization and analysis tools, such as the VisIt software package.
Large Scale Data
A common way to gain insight into volumetric data is through a technique known as “volume rendering”. Because it efficiently communicates three-dimensional structure, volume rendering is applicable to a wide variety of disciplines.
Due to its inherently three-dimensional nature, a volume renderer must examine a large amount of data before it can display an image. As simulation data grow in size, traditional volume rendering applications strain under the increased workload. To combat this issue, scientists from VACET have developed Tuvok, a volume rendering library that harnesses the immense computational capabilities of commodity graphics processing units (GPUs).
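The amount of data involved is easiest to see in the core of a direct volume renderer. The sketch below is an illustrative CPU ray caster, not Tuvok's actual implementation: every output pixel marches a ray through the volume, sampling and compositing at each step, so the data touched grows with the cube of the resolution. Names and the transfer function are purely illustrative.

// Hedged sketch of direct volume rendering by front-to-back ray casting.
#include <array>
#include <vector>

struct RGBA { float r, g, b, a; };

// A toy transfer function: map a scalar sample to color and opacity.
RGBA TransferFunction(float v) { return {v, v, v, v * 0.05f}; }

// March one ray through a dim^3 scalar volume (row-major storage),
// compositing samples until the ray is effectively opaque.
RGBA CastRay(const std::vector<float>& volume, long dim,
             std::array<float,3> origin, std::array<float,3> dir,
             float stepSize, int numSteps)
{
  RGBA out{0, 0, 0, 0};
  for (int i = 0; i < numSteps && out.a < 0.99f; ++i) {
    const float t = stepSize * static_cast<float>(i);
    const long x = static_cast<long>((origin[0] + dir[0]*t) * (dim - 1));
    const long y = static_cast<long>((origin[1] + dir[1]*t) * (dim - 1));
    const long z = static_cast<long>((origin[2] + dir[2]*t) * (dim - 1));
    if (x < 0 || y < 0 || z < 0 || x >= dim || y >= dim || z >= dim) break;

    const RGBA s = TransferFunction(volume[(z*dim + y)*dim + x]);

    // Front-to-back "over" compositing.
    out.r += (1 - out.a) * s.a * s.r;
    out.g += (1 - out.a) * s.a * s.g;
    out.b += (1 - out.a) * s.a * s.b;
    out.a += (1 - out.a) * s.a;
  }
  return out;
}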
Using Tuvok, scientists can visualize the 8 gigabyte Richtmyer-Meshkov dataset shown in Figure 1 on a typical workstation instead of a small supercomputer. With its level-of-detail features enabled, the dataset can be explored in real time, at rates exceeding sixty hertz.
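The level-of-detail mechanism keeps interaction smooth by storing the volume as a hierarchy of progressively downsampled versions and rendering a coarser one while the user manipulates the view. A minimal sketch of how such a level might be chosen follows; the function and parameter names are illustrative assumptions, not Tuvok's actual interface.

// Hedged sketch: level 0 is full resolution, each coarser level halves the
// voxel count along every axis.
#include <algorithm>
#include <cmath>
#include <cstddef>

std::size_t SelectLOD(std::size_t finestVoxels,   // voxels along the widest axis, e.g. 2048
                      double projectedPixels,     // on-screen extent of the volume in pixels
                      std::size_t maxLevel,       // coarsest level available
                      bool userIsInteracting)
{
  // Number of halvings needed so the voxel count does not exceed the pixel count.
  const double ratio = static_cast<double>(finestVoxels) / std::max(projectedPixels, 1.0);
  std::size_t level = (ratio <= 1.0)
      ? 0
      : static_cast<std::size_t>(std::ceil(std::log2(ratio)));

  // While the camera is moving, bias one level coarser to keep the frame rate
  // high; the image is refined back to full resolution once interaction stops.
  if (userIsInteracting && level < maxLevel) ++level;

  return std::min(level, maxLevel);
}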
Solution
For many researchers, eight gigabytes is a
small dataset. Furthermore, additional capability
in visualization and analysis tools does not mean
a task using high-end supercomputing resources
can be transferred to the desktop. Rather, it gives the freedom to run larger scale, higher resolution studies and to analyze the data to resolve higher frequency errors in the data.

Figure 1. Tuvok using its “ClearView” rendering mode. ClearView suits the focus & context model, common in large-scale scientific visualization, through an intuitive metaphor of a lens which can see into the dataset. In this data, scientists can rapidly discern the internal behavior of a Richtmyer-Meshkov simulation at the interface, while simultaneously viewing the global structure of the fluids.
In order to support these high resolution
datasets, the VACET team is integrating Tuvok
into the production visualization and analysis
tool, “VisIt”, already in wide use among
VACET stakeholders. In this environment,
Tuvok will be able to take advantage of VisIt’s
advanced support for file I/O and data
processing.
Figure 2. Volume rendering of a fire simulation.
Researchers running large scale simulations are
generating correspondingly large datasets,
challenging volume visualization tools to render
interactively. GPU ray casting allows this dataset
to be rendered at a fully interactive sixty hertz.
To deal with the scale of the data generated
by collaborators such as APDEC, Tuvok will
work hand-in-hand with VisIt's rich support for
data parallelism, leveraging existing DOE
supercomputing resources. Part of this work
involves infrastructure updates to allow VisIt to
utilize multiple GPUs per compute node, an increasingly popular architecture for visualization clusters. This will ensure that
VisIt's volume rendering system will scale as the
price/performance ratio of these “GPU clusters”
(clusters with one or more high powered GPUs
per node) becomes more attractive.
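A common pattern for such multi-GPU support is to compute a node-local rank for each parallel process and use it to choose a device. The sketch below illustrates that pattern with MPI and the CUDA runtime; it is an assumption for illustration only, not VisIt's or Tuvok's actual code (an OpenGL-based renderer would select its GPU through the display layer instead).

// Hedged sketch: assign each MPI rank on a node its own GPU, round-robin.
#include <mpi.h>
#include <cuda_runtime.h>
#include <cstring>
#include <cstdio>

int main(int argc, char* argv[]) {
  MPI_Init(&argc, &argv);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  // Gather every rank's hostname; a rank's node-local index is the number of
  // lower-numbered ranks that share its node.
  char name[MPI_MAX_PROCESSOR_NAME] = {0};
  int len;
  MPI_Get_processor_name(name, &len);

  char* all = new char[size * MPI_MAX_PROCESSOR_NAME];
  MPI_Allgather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                all,  MPI_MAX_PROCESSOR_NAME, MPI_CHAR, MPI_COMM_WORLD);

  int localRank = 0;
  for (int r = 0; r < rank; ++r)
    if (std::strcmp(&all[r * MPI_MAX_PROCESSOR_NAME], name) == 0)
      ++localRank;
  delete[] all;

  // Round-robin the node-local ranks over the GPUs actually present.
  int nGPUs = 0;
  cudaGetDeviceCount(&nGPUs);
  const int device = (nGPUs > 0) ? localRank % nGPUs : 0;
  cudaSetDevice(device);

  std::printf("rank %d on %s -> GPU %d of %d\n", rank, name, device, nGPUs);

  MPI_Finalize();
  return 0;
}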
Tuvok subdivides each volume into fixed-size blocks, or “bricks”, for simplicity in its renderer, and the brick also serves as a convenient unit for performance measurement. Initial timing results are encouraging: Tuvok can render an average of seven 256³ bricks in a third of a second. On a single GPU, this means the full 2048×2048×1920 Richtmyer-Meshkov dataset would render in about 23 seconds. On the visualization and analysis GPU cluster Lens, housed at the National Center for Computational Sciences, there is the potential for interactive performance at full resolution, once the data are loaded. CPU volume renderers have not been known to achieve such a high level of interactivity using the full resolution of large datasets.
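As a rough check of that estimate, the arithmetic works out as follows. Only the brick throughput quoted above is taken from the measurements; the rest is straightforward division, and boundary bricks that are only partially filled would add a handful more.

// Back-of-the-envelope check of the single-GPU figure quoted above:
// seven 256^3 bricks per third of a second is roughly 21 bricks per second.
#include <cstdio>

int main() {
  const double bricksPerSecond = 7.0 / (1.0 / 3.0);   // ~21 bricks/s
  const double voxels = 2048.0 * 2048.0 * 1920.0;     // Richtmyer-Meshkov dataset
  const double voxelsPerBrick = 256.0 * 256.0 * 256.0;

  const double bricks  = voxels / voxelsPerBrick;     // 480 bricks
  const double seconds = bricks / bricksPerSecond;    // ~22.9 s, i.e. "about 23 seconds"

  std::printf("%.0f bricks -> %.1f s on one GPU\n", bricks, seconds);
  // Spread across N GPUs, the time divides correspondingly, which is why a GPU
  // cluster such as Lens could plausibly reach interactive rates (an
  // illustrative assumption, not a measured result).
  return 0;
}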
VACET is developing tools to ensure that
visualization and analysis does not become the
bottleneck in scientific discovery, even as
petascale datasets are generated and terascale
datasets become the norm. Simultaneously
harnessing the fast-increasing capabilities of
GPUs and traditional distributed memory
systems will enable researchers to interactively
volume render datasets like the aforementioned
Richtmyer-Meshkov instability at full resolution
on medium scale parallel computing resources.
On large systems, the combined parallel
computing resources developed in this work will
greatly accelerate volume rendering on datasets which are currently far from interactive rendering.
Recent Publications
J. Krüger, K. Potter, R. MacLeod, C.R.
Johnson, “Unified Volume Format: A General
System For Efficient Handling Of Large
Volumetric Datasets.” Proceedings of The
International Conference on Computer Graphics
and Visualization (IADIS), 2008
J. Krüger, J. Schneider, R. Westermann,
“ClearView: An Interactive Context Preserving
Hotspot Visualization Technique.” IEEE
Transactions on Visualization and Computer
Graphics (Proceedings of IEEE Visualization
2006)
K. Bürger, J. Krüger, R. Westermann, “Direct Volume Editing.” IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE Visualization 2008)
For further information on this subject contact:
Name: T. Fogal.
Organization: SCI Institute, University of Utah
Email: tfogal@sci.utah.edu
Phone: (801) 585-3950
Note: This work was funded in part by the SciDAC2 Visualization
and Analytics Center for Enabling Technologies and ASCR’s
Visualization Base Program by the Director, Office of Science,
Office of Advanced Scientific Computing Research, of the U.S.
Department of Energy under Contract No. DE-AC03-76SF0009