GPU-Based Interactive Visualization of Billion-Point Cosmological Simulations
Tamas Szalay, Volker Springel, Gerard Lemson
The Visualization Problem
• Getting better at storage and processing
– Distributed databases, clouds, etc.
– I/O only needs to be as fast as the computation
• This doesn't work for visualization
– Would need to read all the data every frame, or keep it all in memory
– Even the rendering itself would be prohibitive
• Could use pre-rendered movies instead
– But trial and error takes time
The Aquarius Simulations
• A series of N-body dark matter simulations
• Run from the early universe to today
• Box roughly the size of the galactic neighborhood
• Run five times at different particle resolutions
– The lowest has 2.3 million particles and takes up about 25 GB total
– The highest has 4 billion and takes up 20 TB
• Each version has point data in 128 ‘snapshots’,
with positions and velocities
• Movies have been rendered, but this took weeks
Visualization Motivation
• Certain types of analysis are very difficult otherwise
– Qualitative impressions of gravitational structures
• Verification of the simulation and of the accuracy of structure finding
• Identification of events of interest
– Comparisons of multiple objects
– Two colliding gravitational clusters
– Dark matter streams
• Public outreach
Hierarchical Rendering
• Don't need to render everything
– It saturates the screen anyway
• So show the same data, but render less
– Create different levels of detail for the entire dataset
– Load different parts from different levels as needed
• Put the levels of detail on a fast storage system
• And give it a rendering front-end
• Think Google Maps or Microsoft Maps (a traversal sketch follows below)
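A minimal sketch of the traversal this implies, in C++. All names here (LodNode, drawPoints, the refinement constant) are illustrative assumptions, not the tool's actual API: each frame, walk the level-of-detail tree and stop wherever the current level suffices, so only a small fraction of the data is ever rendered.

```cpp
#include <vector>

struct Camera { float pos[3]; };

// Illustrative level-of-detail tree node.
struct LodNode {
    float center[3];                // cube center
    float halfSize;                 // cube half-width
    std::vector<LodNode> children;  // empty at the finest stored level

    // Placeholder refinement test; the real screen-size criterion is
    // described on the "Selective Loading" slide below.
    bool fineEnough(const Camera& cam) const {
        float dx = center[0] - cam.pos[0];
        float dy = center[1] - cam.pos[1];
        float dz = center[2] - cam.pos[2];
        float dist2 = dx * dx + dy * dy + dz * dz;
        return halfSize * halfSize < 0.0001f * dist2;  // far away -> coarse is fine
    }
};

void drawPoints(const LodNode&) { /* splat this node's simplified points */ }

// Per-frame traversal: render a node's own (simplified) data when its level
// suffices; otherwise descend into its children for higher resolution.
void renderFrame(const LodNode& n, const Camera& cam) {
    if (n.fineEnough(cam) || n.children.empty())
        drawPoints(n);
    else
        for (const LodNode& c : n.children)
            renderFrame(c, cam);
}
```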
Level Structure
• Chose a spatial octree because it is simple and general
• Each node also has associated data
– All of the data spatially contained within its cube
– But simplified to at most N points (see the node sketch below)
• Deeper in the tree means higher resolution
• The data is organized this way on disk
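As a concrete sketch, one way such a node could be laid out. MAX_POINTS stands in for the slide's N; the field names, the cap value, and the use of child indices are all assumptions, not the tool's actual on-disk format:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Per-particle sample, matching the snapshot contents described earlier:
// positions and velocities.
struct Point {
    float x, y, z;     // position inside the node's cube
    float vx, vy, vz;  // velocity
};

constexpr std::size_t MAX_POINTS = 4096;  // the slide's "N"; value is an assumption

// One octree node: its cube, its children, and a simplified copy of all the
// data spatially contained in the cube, capped at MAX_POINTS. Because deeper
// cubes are smaller, the same cap means higher resolution deeper in the tree.
struct OctreeNode {
    float center[3];
    float halfSize;
    std::array<std::int64_t, 8> children;  // indices/offsets on disk; -1 if none
    std::vector<Point> points;             // <= MAX_POINTS simplified points
};
```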
Selective Loading
• What resolution of data should be loaded at each spatial location?
– Close to the viewer in high detail, far away in low detail
• Can use the on-screen size of the relevant octree cube to determine the resolution (a sketch of the test follows below)
– In theory, this makes the result visually equivalent to the entire dataset
• Automatically scales to the rendering hardware
• Can spread out through time as well
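A sketch of one common formulation of that test; the slide does not give the tool's exact heuristic, so the projection estimate and the pixel budget here are assumptions. The idea: estimate how many pixels the node's cube covers on screen, and refine when that exceeds what the node's simplified point set can fill.

```cpp
#include <cmath>

// Approximate on-screen diameter, in pixels, of a cube of width `size` at
// distance `dist` from the camera, given the vertical field of view and the
// screen height. Standard perspective-projection estimate.
float projectedPixels(float size, float dist, float fovY, int screenHeight) {
    return size * screenHeight / (2.0f * dist * std::tan(fovY * 0.5f));
}

// Refine (load this node's children at higher resolution) whenever the cube
// covers more pixels than its budget. `pixelBudget` is a tunable threshold,
// assumed here, naturally tied to the <= N points stored per node.
bool shouldRefine(float cubeSize, float distToViewer,
                  float fovY, int screenHeight, float pixelBudget) {
    return projectedPixels(cubeSize, distToViewer, fovY, screenHeight)
           > pixelBudget;
}
```

Because the test depends only on on-screen size, the same traversal adapts by itself to screen resolution and GPU speed, which is what lets the scheme scale automatically to the rendering hardware.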
Rendering Front-End
• GPUs are fast
– Really fast
– Can do an unbelievable amount of computation in the rendering pipeline
– Allows the tool to still do significant processing
• The actual rendering algorithm:
– Brightness represents the line integral of the squared density along that pixel's line of sight (formula below)
– Color represents the temperature
– But there is quite a bit of computation involved
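Written out, the brightness described above is a line integral along each pixel's viewing ray. This is just the slide's statement in symbols: rho is the local dark matter density and s parameterizes the ray.

```latex
% Pixel brightness: line integral of the squared density along the view ray.
\[
B(\text{pixel}) \;=\; \int_{\text{ray}} \rho^{2}\!\left(\mathbf{x}(s)\right) \mathrm{d}s
\]
```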
Practical Results
• The program currently runs on a single desktop computer with attached storage
– 4 GB RAM, GeForce 8800 GTS, 2x750 GB disk
• Smoothly interacts with and renders a 1 TB dataset (150 million points × 128 timesteps)
• Rarely loads or renders to full depth
– So the underlying data could be arbitrarily large
Future Possibilities
• Storing and accessing data via databases
– Could even do some processing in between
• Distributed rendering
• Remote rendering
• Other datasets and data types
– Meshes, volume data, medical imaging