ITR Proposal - Geophysical Mass Flow Group at UB

ITR/AP+IM: Information Processing for Integrated Observation and Simulation Based Risk
Management of Geophysical Mass Flows
We propose here a program of IT research that integrates modeling, high performance computing,
data management, visualization, and collaborative environments for hazard mitigation of geophysical mass
flows (including pyroclastic flows and mud flows at volcanoes). This research is application-driven and
multidisciplinary, requires significant advances in IT and includes an international team of researchers in
earth sciences, mathematics, computational sciences and engineering, visualization, distributed and
collaborative environments, geographical information systems and public safety planners.
1. Introduction
The risk of volcanic eruptions is a problem that public safety authorities throughout the world face several
times a year (Tilling, 1989). Volcanic activity can ruin vast areas of productive land, destroy structures, and
injure or kill the population of entire cities. The United States Geological Survey (USGS) reports that
globally there are approximately 50 volcanoes that erupt every year. In the 1980s, approximately 30,000
people were killed and almost a half million were forced from their homes due to volcanic activity. In 1991,
the eruption of Mount Pinatubo in the Philippines alone impacted over 1 million people. The United States ranks third in the
number of historically active volcanoes; some of the largest and most dangerous eruptions in the past century
occurred in the United States. The USGS is the lead agency studying volcanoes in the US, continuously
monitoring volcanoes in Alaska, the Hawaiian Islands, and the Cascade Mountains. Companion agencies
and institutions provide similar data in other countries. Predicting eruptions for restless and dormant
volcanoes remains a difficult challenge, one that we do not address here. Debris flows originating from
severe rainstorms threaten many areas throughout the United States, Mount Rainier being one principal risk
site (NRC 1994, Vallance 1997). These geophysical mass flows, whether dry or laden with water, are the
principal interest herein.
Hazardous activities consequent to volcanic eruptions range from passive gas emission and slow
effusion of lava, to explosions accompanied by development of a stratospheric plume with associated dense,
descending volcanic gravity currents (pyroclastic flows) of red-hot ash, rock and gas that race along the
surface away from the volcano. A volcanic gravity current from Mt. Pelee, Martinique (Lesser Antilles), in
1902, destroyed the town of St. Pierre and killed all but one of the 29,000 inhabitants. This was the largest
number of fatalities from a volcanic eruption in the twentieth century. Other mass flows of surficial material
include rockfalls, debris flows, or avalanches. These flows often carry with them a significant quantity of
water. Debris flows associated with the eruption of Nevado del Ruiz, Colombia, in 1985 resulted in the death
of 26,000 people (NRC, 1991). Although scientists had developed a hazard map of the region, the people in
the devastated area were unaware of the zones of safety and danger. If they had known, many could have
saved themselves. A recent National Research Council report reviewing the Volcano Hazards Program of the
USGS contains a dramatic visualization of two scenarios of the aftereffects of an eruption at Mt. Rainier –
one with proper preparation based on simulation and modeling and one with no preparation (NRC, 2000).
Often after a disaster, homes, businesses and roads are re-built without regard for the future risk.
The decisions regarding public safety in hazard zones are made by a collection of agencies and government
representatives. These decisions are traditionally based on a combination of experience, heuristics, and
politics. Planning decisions for new construction around potential hazard sites are largely made in the same
way, with financial considerations often playing a major role. Thus, an effort to integrate and organize the
vast amounts of data from simulations and observations, and to disseminate it as meaningful information to
public safety planners, political leaders and scientific personnel, is of utmost importance.
To conceptualize the many facets of an ideal hazard evaluation system, consider the flow of data
involved. A video stream from a volcano observation station, along with records of seismic activity, ground
deformation and gas emission, might be sent to scientists on a regular basis. A topographical map of a hazard
zone, containing elevation measurements for every 10 m² cell and extending some tens of kilometers from
the volcanic vent, is stored on computer disk. Geographic information systems data, including life-line and
support systems such as population centers, hospitals, roads, and electricity and water lines, also sit in a
database. When a triggering, premonitory event occurs, the topographical data are integrated into a flow
simulation whose 'initial configuration' depends on parameters deduced from the monitored data stream.
These simulations produce tens of gigabytes of data each, the output of which must be tied to the GIS database. Finally, for humans to comprehend the results of simulations, all the output must be rendered for visualization on appropriate hardware. To unite this diverse information, this proposal presents a broadly based research plan, integrating mathematical modeling and scientific computing together with advanced communication and visualization, all within the context of geological sciences and geographic information systems.

[Figure 1: Data and Information Flow for the Geo-Hazard Monitoring System. Flows: a) topography data for simulations; b) requests for local finer resolution topography data based on preliminary simulations; c) multi-resolution data sets from simulation; d) simulation parameters and other inputs (e.g. real-time sensor inputs, model selection, etc.); e) multi-resolution topography data for visualization and appropriate land use data; f) requests for additional auxiliary information; g) user appropriate multi-resolution visualization; h) multi-media reports for public safety planning, public notification and scientific analysis.]

This effort confronts several IT research challenges, including:
• developing realistic three-dimensional models and simulations of geophysical mass flows;
• integrating information from several sources, including simulation results, remote sensing data, and Geographic Information System (GIS) data;
• extracting, organizing and presenting information in a range of formats and fidelities, including audio, visual, and text, to a geographically distributed set of scientists and decision-makers involved in risk management.
Solutions to these research challenges will be developed, refined and used interactively at the several
participating organizations (University at Buffalo, Universidad Nacional Autónoma de Mexico, Centro
Nacional de Prevención de Desastres, and Universidad de Colima), to analyze, forecast, and display risk
scenarios for selected geographic sites. Participation of the three Mexican sites grows out of a longstanding
scientific collaboration among three participants (Sheridan, Bursik and Vallance) and several researchers at
UNAM, CENAPRED, Colima and Veracruz. Letters of commitment and support from our partner
institutions and the US Geological Survey are attached. Three Centers at the University at Buffalo will
participate in this program: the Center for Computational Research (CCR), which supports high-performance
computing; the New York State Center for Engineering Design and Industrial Innovation (NYSCEDII),
supporting visualization and collaborative work environments, and the National Center for Geographic
Information Analysis (NCGIA), a research center for basic and applied research in geographic information
science and spatial analysis. The computing and visualization hardware and software, and the scientific
expertise afforded by the collaboration of UB's centers, together with the international cooperation offered by
our Mexican colleagues, make possible this multidisciplinary effort. The tools and software we develop will be transferable to other research and development settings. Equally important, the students we train
will mature in a rich environment of multi-national and interdisciplinary research. Our experiences will
provide a framework for establishing collaborations among a wide range of disciplines.
2. Developing Realistic Simulations (Patra, Pitman, Bursik, Sheridan, Vallance)
Although geophysical mass flows are dangerous, in many cases the loss of life may be reduced if
public safety officials and the general population are educated about the potential hazard effects on their
local environment. Flow models of the kind proposed here are useful for risk management, allowing
scientists to forecast the movement of volcanic materials on the earth’s surface (Sheridan, 1979, 1995, 1996).
Applications of such models include understanding geophysical mass-flow hazards, developing risk maps
and strategies for risk mitigation, and assisting in crisis analysis. We will develop a new generation of high
fidelity modeling and information systems that will be critical to the success of such analysis. The recent
availability of finer scale terrain models (up to 1 m resolution) as opposed to the older 30 m scale data
enables much higher fidelity analysis, but it also demands better models of the physics and efficient management of large-scale data, computation and visualization.
Most currently available codes work on small areas with coarse grids, and are slow. We propose to
develop codes that input data sets which include the entire area of risk surrounding well-studied volcano and
mudflow sites (see Sheridan, 1996, Pitman, 1999) at sufficiently fine resolutions. The data will be a high-density grid of topographic points (x, y, z data) at a horizontal spacing of a few to tens of meters, and a vertical resolution on the order of 1-10 meters. The areas encompassed by a single network may be as large
as 50 km x 50 km. This terrain elevation data can be obtained by satellite or aerial imaging. Verification of
models and simulations will also be required (also see Tischer 1999, Woods 1998).
2.1 Basic modeling assumptions: The phenomena we will model center on rock avalanches, pyroclastic
flows, and debris flows, which are all concentrated granular flows often associated with significant quantities
of entrained fluid. These phenomena are some of the most dangerous types of geologic events (Tilling, 1989)
and have caused tens of thousands of deaths during the last century. Savage and Hutter (1989) proposed a
“shallow water” model of geophysical mass flows. They begin by considering an incompressible Mohr-Coulomb material such that:

\partial(\rho u)/\partial t + \nabla \cdot (\rho\, u \otimes u + T) = \rho g    (1a)
\nabla \cdot u = 0    (1b)

where T is the stress tensor, u is the velocity, \rho is the (constant) density and g is the acceleration due to gravity. This model system is unconditionally ill-posed (Schaeffer 1986). Allowing for variation in density mollifies the ill-posedness of the Mohr-Coulomb theory, but does not remove it entirely (Pitman and Schaeffer, 1988, Schaeffer and Pitman, 1989). This class of models (the Critical State Theory, Schofield and Wroth 1968) is governed by a system of the form:

\partial \rho/\partial t + \nabla \cdot (\rho u) = 0    (2a)
\partial(\rho u)/\partial t + \nabla \cdot (\rho\, u \otimes u + T) = \rho g    (2b)

Constitutive models relating T = T(\rho, u) are needed to complete the picture. Other models of similar structure, including those using visco-plastic models of mud flows, have been derived (Davies 1986).
2.2 Multi-scale Modeling and Well Posedness: The physical processes activated in debris avalanches, or
mudflows are many and varied, and occur at several length and time scales (Costa, 1984); we concentrate our
attention on phenomena at scales on the order of one meter. Dry granular flows may contain particles
ranging in size from a few centimeters to a couple of meters. Interstitial fluid will affect these different
particles differently. We use the term interstitial fluid to represent the collective “mixture” comprised of
fluid and sufficiently small particles. Thus all dynamics below a certain length scale is lumped into the
“interstitial fluid”. We will investigate both effective material models, following the work of Iverson, and
full multi-material models. Because of the close coupling of the multiple scales of important physics present in these flows, multi-scale modeling and computing are required in order to derive and accurately solve the
governing equations. Because of the numerical complexities of free surfaces and multiple materials, we will
use new, non-traditional computational methods, like solution adaptive hybrid grid-particle schemes.
Understanding segregation of fine and coarse particles is also a significant challenge (e.g. Vallance 2000).
The heat capacity of the soil and rock material that we consider here is large, permitting us to neglect thermal effects. Thus we do not attempt to model temperature-dependent lava flows. However, including visco-plastic materials permits us to examine an ‘isothermal’ lava system. Iverson (1997) extended
the Savage-Hutter model to include, at least partially, the effects of interstitial fluid (see Iverson & Denlinger
2000a for further elaboration). However the dynamics of all the debris and rock flows we consider
ultimately rely on granular temperature (a measure of local velocity fluctuations) and pore fluid pressures,
field variables that are difficult to model in a simple way. This makes detailed modeling of rheology
difficult. In spite of these very real challenges to our understanding of physics, experiments on dry and wet
granular flows indicate that inter-granular shear stresses are, more or less, proportional to inter-granular
normal stresses. Thus the Coulomb modeling assumption is generally a good approximation for these flows.
Models of incompressible dry Coulomb friction are dynamically ill-posed. To use Smooth Particle
Hydrodynamics-based methods (SPH), an equation of state, prescribing, for example, mean normal stress as
a function of density, is required. The Critical State Theory provides this kind of theoretical structure.
Although the Critical State Theory is also ill-posed, the mathematical ill-posedness is subtle. In preliminary
studies, the dissipation induced by an SPH scheme has been sufficient to overcome this ill-posedness.
2.3 Depth Averaged Models: Typical flows may extend up to several kilometers in length, but are only a
few meters deep. This observation motivates the depth averaging prevalent in most current models of such
flows. Flows run over (possibly) complicated terrain, and variation in elevation must be integrated into the
modeling and into computer simulations. The Savage-Hutter model of dry rock and soil debris flows relies
on the depth-to-length scaling together with a classical earth pressure modeling assumption, to simplify the
governing equations by ‘depth averaging’. A natural scaling based on this observation gives a small parameter, H/L, where H is the typical height of the flow and L is a characteristic down-slope dimension; the appropriate time scale is (L/g)^{1/2} and the velocity scale is (gL)^{1/2}. The resulting equations in the remaining dependent variables h(x,t), the thickness of the flow at location x and time t, and the down-slope velocity u, are:

\partial h/\partial t + \nabla \cdot (h u) = 0    (3a)
\partial(h u)/\partial t + \nabla \cdot (h u \otimes u + \tfrac{1}{2}\kappa h^2 I) = h\,\mathbf{g} + h\,U(u)    (3b)

where \kappa denotes the coefficient arising from the earth pressure assumption. Zero boundary conditions are often applied. The hU(u) term incorporates effects of a variable terrain
prescribed by the basal elevation b=b(x,t) (Greve, 1994). This system of hyperbolic partial differential
equations with source terms is more amenable to analysis than the original system. The process of depth
averaging regularizes the model equations. These “shallow grain” model equations can be solved
numerically, relying on techniques developed in CFD (Gray 2000, Pitman 1999, Iverson 2000b).
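As a concrete illustration of how the depth-averaged system (3a)-(3b) can be advanced numerically with standard CFD techniques, the sketch below implements a minimal 1-D finite-volume update with a Lax-Friedrichs flux and a simple Coulomb basal-friction source term. It is an illustrative assumption, not the adaptive 2-D solvers proposed here; all function and parameter names are hypothetical.

```python
# Minimal 1-D finite-volume sketch for the depth-averaged ("shallow grain")
# equations (3a)-(3b), written in dimensional form on a periodic uniform grid.
# Illustrative only: the proposed codes are 2-D, adaptive, and run over a TIN.
import numpy as np

def step(h, hu, dx, dt, g=9.81, kappa=1.0, phi_bed=np.radians(30.0)):
    """Advance flow depth h and momentum hu by one explicit time step."""
    u = np.where(h > 1e-6, hu / np.maximum(h, 1e-6), 0.0)
    f_h = hu                                   # mass flux
    f_hu = hu * u + 0.5 * kappa * g * h**2     # momentum flux with earth-pressure term
    a = np.max(np.abs(u) + np.sqrt(g * np.maximum(h, 0.0))) + 1e-12   # wave-speed bound

    def div(q, f):
        # Lax-Friedrichs numerical flux at cell interfaces, then its divergence.
        f_face = 0.5 * (f + np.roll(f, -1)) - 0.5 * a * (np.roll(q, -1) - q)
        return (f_face - np.roll(f_face, 1)) / dx

    h_new = np.maximum(h - dt * div(h, f_h), 0.0)
    friction = -np.sign(u) * g * np.maximum(h, 0.0) * np.tan(phi_bed)  # Coulomb drag
    hu_new = hu - dt * div(hu, f_hu) + dt * friction
    return h_new, hu_new
```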
Depth averaging, however, ignores two important phenomena – a) erosion of the channel through
which material flows and, b) calculation of the free surface at the top. The rubbing action of a flow may lift
only a few cubic meters of material from the basal layer at any one location. However, because the flow is
so long, the cumulative effect is enormous. It is estimated that perhaps half of the final deposit of many flows may be due to eroded material. Modeling such effects within the confines of a depth-averaged approach is difficult. Both computational and modeling issues arise at the free surface on top of any flow.
Computationally, free surface flows are challenging even for single material flows; multi-material flows are
more complex yet. Depth averaging sidesteps these difficulties but is unlikely to be a very good model for
the high resolution analysis that we will pursue. These difficulties motivate our pursuit of a full flow model.
2.4 High Resolution Full Mass Flow Models: In preliminary work, to cope with the availability of finer
resolution topography data (and hence larger simulations), we ported Sheridan's FLOW3D (which uses a
particle based approach to model pyroclastic flows) to a parallel computing environment. In this port, 30
meter and 10 meter triangular irregular network elevation data was used to describe terrain (Sheridan 1995)
(see Fig.1). At the same time, we explored the use of data sets at different resolutions in the computation and
visualization. This capability will be essential for efficient remote visualization – a key component of the
work proposed in subsequent sections.
It is natural, then, to begin our efforts by solving the "shallow material" model system over realistic
terrain, incorporating both Sheridan's work and the experience of Pitman and Tulyakov 1999 (and Gray
2000, Iverson 2000b). We will also test particle-fluid flow models and mud models, as mentioned above.
We will solve the governing PDEs both by finite difference/finite element methods and also by using SPH
(Monaghan, 1992, Cleary, 1999, Bursik, 1999, Dilts 1999, Hoover 2001). Our use of the term ‘SPH’ includes
other related methods, like reproducing particles, partition of unity, SPAM, and moving least squares. This
first step allows us to further develop and test our visualization capabilities. We believe that the SPH class of
models will be particularly attractive for flow modeling in a parallel computing environment. Next we will
solve a full model system of dry flow by SPH methods. The smooth particles in an SPH simulation require
the same kind of constitutive information as is available in a full model. Thus, substituting different material
models (e.g. dry granular flows, visco-plastic mudflows) is relatively direct. Although the computational
details have not yet been fully resolved, it is natural to consider an SPH scheme for full multi-material flows.
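To make the SPH idea concrete, the fragment below shows the kernel-weighted summation that underlies all of the particle methods mentioned above, here estimating density from particle masses in 1-D. It is a simplified sketch under stated assumptions (brute-force neighbour search, 1-D cubic-spline kernel), not the production code.

```python
# Core SPH interpolation: rho_i = sum_j m_j W(|x_i - x_j|, h_s)
# using the standard 1-D cubic-spline kernel. Brute-force O(N^2) neighbour
# handling keeps the sketch short; real codes use trees or cell lists.
import numpy as np

def cubic_spline_kernel(r, h_s):
    q = r / h_s
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / (3.0 * h_s)) * w             # 1-D normalisation

def sph_density(x, mass, h_s):
    r = np.abs(x[:, None] - x[None, :])        # pairwise particle separations
    return (mass[None, :] * cubic_spline_kernel(r, h_s)).sum(axis=1)
```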
2.5 Hybrid Mesh-Particle Schemes: To better account for the physics we propose a scheme that combines
SPH and adaptive higher order discontinuous Galerkin grid based schemes (Cockburn 1989, 1990, Devine,
1996, Bey, Oden and Patra, 1996, Oden, 1998, Flaherty, 2000). The granular material will continue to be
modeled by SPH particles. Interstitial fluid will be modeled by a continuum, and computed by DG finite
elements. A “projection” of granular particle properties to the fluid will transfer momentum between phases,
similar to the momentum transfer of the immersed boundary method (see Pitman 2000, Pitman, Nichita
2000, Maxey 1997). The adaptive discontinuous Galerkin schemes, first devised for hyperbolic conservation
laws, provide opportunities for very high spatial accuracy (Bey, 1996). Adaptive Galerkin schemes provide
O(h^p) numerical accuracy, where h is the grid size and p is the local polynomial order of the approximation. These schemes give the flexibility of localized high-resolution grids without the significant overhead of classical finite element schemes, which require continuity of solution fields. By combining SPH and DG
schemes, we obtain the best features of both Lagrangian and Eulerian computing methodologies.
As DG finite elements are integrated into computations, issues of accuracy arise. The computational
grid on which we work is a patchwork of elevation measurements obtained by satellite, shuttle, or airplane
measurements, arranged into a TIN – triangular irregular network. The technology for obtaining standard
data provides locations approximately every 30 meters, with about 5-meter elevation accuracy. More recent
measurements are at 10 meters, with about 1 meter vertical accuracy. However for specific, limited locales,
even better accuracy – about 3 meter blocks with elevation good to about 0.5 meter – can be obtained from
GIS data. We propose integrating this GIS data, where available, to locally refine our best available TIN
and adaptively solve the fluid subsystem. Thus our simulations will use solution adaptivity to control both
numerical and modeling errors. This requires real-time interaction between GIS and the simulation, whereby
the simulation code will query the GIS for additional information on elevation data as it refines the TIN.
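The sketch below illustrates the kind of particle-to-grid "projection" referred to above: particle momentum is scattered to a background grid with linear weights, giving a grid velocity field through which momentum can be exchanged with a grid-based fluid phase. The 1-D setting, the linear weights and all names are illustrative assumptions.

```python
# Scatter particle momentum m_p * u_p onto a uniform 1-D grid of cell centres,
# then recover a grid velocity: the elementary particle-mesh transfer used to
# couple Lagrangian particles to an Eulerian (e.g. DG) fluid solver.
import numpy as np

def project_momentum(xp, mp, up, x0, dx, n_cells):
    mom = np.zeros(n_cells)
    mass = np.zeros(n_cells)
    s = (xp - x0) / dx - 0.5                       # particle position in cell-centre units
    i = np.clip(np.floor(s).astype(int), 0, n_cells - 2)
    w = np.clip(s - i, 0.0, 1.0)                   # linear weight toward cell i+1
    np.add.at(mom, i, (1.0 - w) * mp * up)
    np.add.at(mom, i + 1, w * mp * up)
    np.add.at(mass, i, (1.0 - w) * mp)
    np.add.at(mass, i + 1, w * mp)
    return np.where(mass > 0.0, mom / np.maximum(mass, 1e-12), 0.0)
```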
2.6 Data Management, Load Balancing and Geometric Indexing: To develop realistic simulation
methods, we must exploit solution adaptivity and GIS data to control numerical and modeling errors. To this
end, the root kernel of our simulation codes will be a new data management scheme that facilitates storage of
and access to massive data sets, as well as data obtained from simulations. Our simulations will permit local
mesh refinement, and thus include support for locally refined analysis in areas of special interest. The
organization of GIS data that provides input to the simulation will also be facilitated by this structure. The
key to this management scheme is the use of indexing schemes and distributed dynamic data structures that
support parallel adaptivity. The data structures must be designed to allow fast access to selected subsets of
data, as well as fast traversal of the complete data set in distributed memory environments. Hence, data from
simulations must be easily integrated and correlated with the data from field observation and remote sensing.
Geometry-based index schemes will allow this integration. We have developed such indexing
schemes for parallel adaptive finite element simulations (Patra 1999,2000) using space filling curve based
schemes to order the elements and nodes (see http://www.eng.buffalo.edu/eng/mae/acm2e).
Such locality preserving orderings also have interesting additional benefits in terms of solution efficiencies for cache-based microprocessors (Long and Patra 2001), hierarchical structuring of data, etc. (Abel and Mark, 1990). Because of the logically non-local nature of typical particle and particle-mesh computational data structures, incorporation of these algorithms into our management scheme will require separate indexing for logically-connected and physically-connected regions. We will also have to account for the dynamic nature of the computations as the flow evolves, creating and destroying computational objects as required.

[Figure 2. FLOW3D simulation of mud flows at Popocatepetl Volcano, Sheridan (1996).]
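As a small illustration of geometry-based indexing with a space-filling curve, the sketch below computes Morton (Z-order) keys by bit interleaving; sorting cells, elements or particles by such keys keeps physically nearby objects close in memory and on disk. It is a schematic example, not the indexing library developed in (Patra 1999, 2000).

```python
# Morton (Z-order) key: interleave the bits of integer cell coordinates so that
# sorting by key yields a locality-preserving, space-filling-curve ordering.
def morton_key(ix: int, iy: int, bits: int = 16) -> int:
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)       # x bits in even positions
        key |= ((iy >> b) & 1) << (2 * b + 1)   # y bits in odd positions
    return key

# Order grid cells (or particles binned to cells) along the curve.
cells = [(3, 1), (0, 0), (2, 2), (1, 3)]
cells.sort(key=lambda c: morton_key(*c))
```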
In distributed memory environments such computations involve additional burdens of dynamic load
balancing and data migration. While these are technically challenging problems, our past work (Laszloffy,
Long and Patra 2000, Patra, Laszloffy and Long 2001) developing schemes for efficient and load balanced
parallel adaptive finite element computations should readily extend to this framework. Another principal
challenge will be the development of appropriate indicators of solution quality – to guide the grid and model
adaptivity. The DG schemes will encompass flux calculations that can be suitably postprocessed to yield low
cost and robust indicators of numerical error. Finally, for high-fidelity simulations, good rheology models must be integrated into the combined SPH/DG algorithms, with appropriate field observations and assessments of simulation quality used to select among models.
IT Research Challenges: Developing computational methods that are fast and scalable, which
accurately represent the complex multi-material and multi-scale physics of geophysical mass flow problems,
and the concomitant development of data management tools for enabling simulation, are significant
challenges.
3. Development of Interactive Immersive Visualization (Kesavadas, Patra, Bloebaum)
The simulation and interpretation of events at multiple scales, incorporation of GIS data, and fidelity
of all data, requires not only high performance computing but also a visualization system capable of handling
hundreds of gigabytes of data. We will design a visualization environment to allow interactive navigation
and manipulation of view, and to include a spectrum of visual detail ranging from individual buildings to
entire mountainside scenes. The inherent complexity of navigating and visualizing across a range of scales
demands a sensible hierarchical organization of data, and a multi-resolution paradigm for display.
Multiresolution models are developed for representing, analyzing, and visualizing models at different
levels of detail. They employ techniques for data reduction and decimation (Pagillo 2000, Renze 1996 and Schroeder, 1992), which make real-time simulation and visualization possible. Moreover, in cases such as
this, a hierarchical organization allows better interaction with the models. Research in multiresolution models
has primarily been devoted to the representation of natural terrain, in the context of Geographical
Information Systems (GISs). Multiresolution terrain models can be classified into two main classes, namely
Hierarchical Terrain Models (HTMs) and Pyramidal Terrain Models (PTMs) (De Floriani 1989, De Berg and
Dobrindt 1995). Based on topology of subdivision, digital terrain models can be classified into Regular
Square Grids (RSGs) and Triangular Irregular Networks (TINs). In this proposal, TIN terrain models are used to model the volcano sites. Due to the inherent regularity of their grid structure, RSGs are non-adaptive to terrain features, whereas TINs naturally adapt to topography: relevant lines and points representing
topographic features (such as peaks, pits, passes, ridges and ravines) may be included as edges and vertices
of a triangulation. There are two main approaches to the construction of an approximated TIN: a top-down strategy, such as Delaunay-based TINs (Fowler and Little 1979), or a bottom-up strategy (Lee 1991). According to the refinement criterion, existing TIN models can be classified into Quadtree-based models (Gomez and Guzman
1979, Barrera and Vazquez 1984, Fekete and Davis 1984) and Hierarchical triangulated models (De Floriani
et al. 1984, Scarlatos and Pavlidis 1992, De Floriani and Puppo 1992, De Floriani and Puppo 1995, Dyn et
al. 1990, Rippa 1990, Rippa 1992, Quak and Schumaker 1990, Schumaker 1993 a,b).
A pyramidal terrain model is a sequence of DTMs, each of which is defined over the whole domain,
and is obtained by refining a subdivision of the domain at increasingly finer resolutions. Another effective technique for terrain generation is texture mapping of images onto the underlying polygons. Performance can be improved by directly mapping aerial photographs onto low-resolution terrain polygons, thus improving the rendering speed for large databases. Reliable and efficient modeling of texture maps, mipmaps and levels of detail (LOD) based on location within the database is a relevant area of research (DeFloriani, 1999, Hoppe, 1998). Fractal-based terrain and color models can also be successfully generated for relatively low-resolution models.
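A simple way to make the level-of-detail idea concrete is the rule sketched below: choose the coarsest level of a terrain pyramid whose geometric error, projected to screen space, stays under a pixel tolerance. The error model, thresholds and names are illustrative assumptions rather than the actual visualization system.

```python
# Pick a terrain pyramid level by projecting each level's geometric error
# (in meters) to screen pixels and accepting the coarsest level under tolerance.
import math

def choose_lod(distance_m, fov_y_rad, viewport_h_px, level_errors_m, tol_px=1.0):
    """level_errors_m is ordered coarsest to finest."""
    px_per_m = viewport_h_px / (2.0 * distance_m * math.tan(fov_y_rad / 2.0))
    for level, err in enumerate(level_errors_m):
        if err * px_per_m <= tol_px:
            return level
    return len(level_errors_m) - 1              # nothing coarse enough: use finest

# Example: a four-level pyramid with 10 m, 5 m, 1 m and 0.5 m maximum errors.
lod = choose_lod(2000.0, math.radians(45.0), 1080, [10.0, 5.0, 1.0, 0.5])
```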
Some of the relevant issues we are concerned with in this proposal are: (i) Type of data models: that is, the type and structure of the data that is output from the simulation, and the abstraction that is used to support the rapid manipulation of data by the visualization environment. (ii) Hierarchical and multilevel models: In large-scale simulations, the sheer size of the grid is a challenge for real time visualization. Researchers over the last few years have looked into determining the best way to apply hierarchical and multilevel methods to unstructured grids. Effective file formats are needed to represent meta-data and time-dependent meta-data. Some parts of the simulation data may also require higher resolution rendering, requiring corresponding multi-resolution visualization techniques to provide increased visual detail in specific areas. Multi-resolution techniques may also enable the user to study an overview of a large dataset, and then "dive" into high-resolution data for a particular area of interest. (iii) Feature extraction: Another very important area of research looks into datasets that may require hours to visually scan for features. Automatic feature extraction can be coupled with hierarchical techniques to produce indexes and atlases capturing the salient features of a large dataset in an overview mode, and these overviews can then be linked interactively to higher-resolution forms of the data. (iv) Computational and measurement errors and uncertainty in 3D datasets:
The ultimate goal of uncertainty visualization is to provide users with visualizations that incorporate and reflect uncertainty information to aid in data analysis and decision making.

We therefore propose a scalable IT visualization framework that makes it feasible to transmit real-time visualizations computed on high-end computers, such as an SGI Onyx2, directly to any desktop workstation connected to a standard network.

[Figure 2. Scalable visualization framework.]
Using new techniques based on OpenGL Vizserver, information from the server's visualization frame-buffer will be sent as compressed packets to client machines, making it possible to visualize gigabyte-scale data over the Internet. Fig. 2 shows a proposed architecture for the scalable IT visualization, in which one or two sites with advanced visualization computers (e.g. CCR, NYSCEDII) will carry out the simulation, while the client computers located at the hazard sites will aid local officials in monitoring the simulation in real-time using nothing more than an SGI Octane or a Linux-based PC. At the lowest level, the webserver will stream key frames from the frame buffer (low resolution) to public websites.
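The streaming idea can be pictured with the generic sketch below, which compresses rendered frames and pushes them to a thin client over a TCP socket with a length prefix. This is not the OpenGL Vizserver API; the protocol, names and parameters are purely illustrative.

```python
# Generic frame-streaming sketch: compress each rendered frame and send it,
# length-prefixed, to a connected thin client. Real deployments would add
# authentication, multiple clients and lossy compression.
import socket, struct, zlib

def serve_frames(frame_source, host="0.0.0.0", port=5900):
    srv = socket.create_server((host, port))
    conn, _ = srv.accept()
    for frame_bytes in frame_source:            # e.g. raw framebuffer dumps
        payload = zlib.compress(frame_bytes, 1) # fast lossless compression
        conn.sendall(struct.pack("!I", len(payload)) + payload)
    conn.close()
    srv.close()
```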
3.1 Feature Extraction and Meta-structure: In addition to the sheer size and range-of-scales presented by
simulation and GIS data that need to be visualized, the extraction of important features presents a significant
challenge to visual environments. Again, we propose building on our hierarchical techniques to produce
intelligent indexes and atlases that capture salient features of the hazard site (Kesavadas and Subramaniam,
1999). These overviews will then be linked interactively to higher-resolution forms of the data. Uncertainty
in image data means that we must provide users with a measurement of confidence in the information used
for data analysis and decision making. This measurement itself may be scale-dependent and dynamically
changing. Because visualization is crucial for risk analysis of hazard settings, we must design features to
allow creation of meta-structures of data that can be extracted for detailed, off-line analysis.
Our approach to these problems consists of a combination of four broad stages: (a) creating linked data out of the simulation and GIS structures, (b) adapting a tessellation algorithm to create connectivity, (c) developing a dynamic decimation strategy to allow quick refinement of the large data set, and (d) developing a Vizserver architecture for sharing real-time visualization over the network. Our first objective is to store data in a form which can be used for visualization, then decimate it to an optimum size for rendering quickly and without undue processor and communication overhead. Our second objective is to allow dynamic re-decimation based on user interaction and manipulation.
3.2 New Interaction: Virtual Tools with Intelligent Attributes: Our interaction with the large model visualization will be through instrumented gloves and the wand. The glove is a more intuitive way of interacting with the large volcano models, while the wand is a natural interface for navigating and interacting with the menu systems. In our visualization environment, the user will be able to reach into the scene and pick points, regions or even scientific variables to study the simulation data. To achieve this, we propose to develop a new interaction methodology demonstrated by Kesavadas, called virtual tools with intelligent attributes (Kesavadas and Subramaniam, 2001). These virtual tools will have several routine characteristics built into them; thus, by grabbing an appropriate tool, the observer will be able to achieve manual navigation or data changes much more easily and intuitively. For example, by using "virtual binoculars", the user will zoom in to a certain part of the volcano site, allowing high-resolution data rendering only at the view/zoom level of the view port. The user in principle will not have to understand how he or she will extract that information, because the intelligent tool will know the exact data rendering procedure. Once a region is identified, the designer will grab a "virtual pen or brush" with a desired end point, using the instrumented glove, and use this to circle or annotate the large terrain. Since the end point and other attributes of the tool are known, we will use them to calculate precise information by intersecting with the meshed models to reflect the physical properties and variables of the computational models. A wide range of scientific visualization and interaction activities is proposed using such schemes. Other such tools could be used for locally measuring (virtual gauge) the flow of lava, smoke, etc., giving the scientists an advanced tool to interact with the complex volcano model.
IT Research Challenges: Developing a range of scalable multi-resolution visualization techniques for fast,
accurate, user friendly interactive visualization of large data sets from simulation and GIS at different levels
of fidelity for a geographically distributed set of users.
4. Integration of GIS and Simulation Data (Mark, Patra, Kesavadas)
The last decade of the 20th century witnessed the development of several forms of computerized
models for hazards associated with volcanic or mudflow events. Unfortunately, these models are not
interactively linked to visualization systems nor to geographic databases. This fact limits their use by
political decision-makers for risk mitigation. Computation, communication, and information technologies
during this period advanced at a significantly faster rate than the development, testing, and utilization of
controlled scientific models. In general, posters, still images, or video scenes of hazard events are the main
methods used to explain these phenomena to the public safety officials and to illustrate potential crisis events
to the local inhabitants. Only in a few cases were advanced technologies or computer models used in the
development of volcanic hazard maps. It is safe to forecast that in the first decade of the 21st century the
technology for realistic portrayal of simulated hazard phenomena will make immense advances. In
particular, the opportunity exists to illustrate potential hazards with realistic scenes at different scales, using
images from actual sites mapped onto a 3-D framework with scenes that are easily recognized by the viewer.
Computer simulation of mass flow phenomena, coupled to integrated GIS data, could allow analysis
of infrastructure, safety, and risk in a manner that is not possible today. Flow models are useful to forecast
the movement of soil, rock and fluid on the surface. Not only can GI data provide high resolution
topographical data, but GIS data can also provide vital information on population centers, infrastructure, and
access to and from threatened areas. Integration of simulation results with GIS data will provide pre-crisis
understanding of hazards, assist in developing risk maps, provide real-time crisis assistance, and help in post-crisis reconstruction and distribution of aid. Visualization of all this information is necessary for analysis.
A novel feature of the work proposed here is a direct coupling of simulation and GI data. Our simulations will be
solution adaptive; that is, the simulations will use measures of solution quality to improve approximations locally by
using, for example, finer grids or more SPH type particles. When solution refinement is required, a query of the GIS for
additional information on topography will produce simulations that have both minimal numerical error and minimal
modeling errors. Our use of geometry based indexing for simulation data will permit this coupling to be performed with
ease.
4.1 GIS Based Simulations: One of the major GIScience research issues we will face is how to get
topographic data from the GIS into the simulation. Commercial GIS software has not been pushed regarding
retrieval speed and efficiency. Also, hierarchical or other variable-resolution DEM data are not available in
the commercial products. The internal data organization of the simulation will use geometry based indexing,
and we will need to have high-resolution topographic data only in the area of detailed simulation
calculations, with more sparsely-spaced data in other areas. For simulations based on regular grid DEMs, it
seems likely that it will be most efficient to store the elevation data in a file that is indexed hierarchically,
rather than in the proprietary GIS. Mark, Lauzon, and Cebrian (1989) explored quadtree-based hierarchical
indexing schemes, and found that sequencing the elevation grids according to disk addresses obtained by
interleaving the x- and y-coordinate bits (so-called Morton ordering) made it easy to write quadtree-driven
code to retrieve dense grid data only where needed. In a related effort, Abel and Mark (1990) explored a
number of related hierarchical orderings for spatial data processing, but did not include DEM indexing as one
of the bases for comparison. The simulation model in this project is adaptive, i.e., it will regrid with finer resolution or add more SPH-type particles in areas where the solution is inaccurate. Such regridding without finer resolution topographic data will not be very useful, and so we will develop mechanisms to allow the simulation to query the DEM data for additional finer resolution topographical information quickly and only in areas where needed. We will develop methods for indexing and rapid retrieval as triggered by
the sensitivity analysis for the simulation model, and test them across some indexing strategies that are
discussed in the above articles.
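The sketch below illustrates the retrieval pattern described above: a quadtree-style recursion that descends only into quadrants intersecting the region the adaptive simulation wants refined, yielding the fine DEM tiles to fetch. The tiling scheme and names are illustrative assumptions, not a particular GIS product's interface.

```python
# Yield only the finest-level tiles of a quadtree decomposition that intersect
# a requested refinement region; everything else is pruned without descent.
def tiles_for_region(region, tile, max_depth, depth=0):
    x0, y0, x1, y1 = tile
    rx0, ry0, rx1, ry1 = region
    if rx1 <= x0 or rx0 >= x1 or ry1 <= y0 or ry0 >= y1:
        return                                    # no overlap: prune this quadrant
    if depth == max_depth:
        yield tile                                # leaf tile: fetch dense elevations here
        return
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    for quad in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                 (x0, ym, xm, y1), (xm, ym, x1, y1)):
        yield from tiles_for_region(region, quad, max_depth, depth + 1)

# Tiles needed for a 30 m x 30 m refinement window inside a 1024 m x 1024 m domain.
needed = list(tiles_for_region((120.0, 80.0, 150.0, 110.0),
                               (0.0, 0.0, 1024.0, 1024.0), max_depth=5))
```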
4.2 GIS Based Visualization: In contrast to the situation of rapid redensification of grid DEM data, which is
best solved outside the GIS, we will rely on commercial GIS products ArcView GIS and ArcInfo to integrate
simulation results with land-use, land cover, population data, infrastructure, and other GIS information. We
then will use the GIS to drape the integrated two-dimensional geospatial data over the DEM to produce 3D
visualizations of the simulation output in its proper geographic context.
IT Research Challenges: Developing techniques and interfaces to couple topographic and land use data into
the adaptive simulation and visualization framework.
5. Geographically Distributed Visual Environment for Collaboration (Bloebaum, Winer)
Because many who will contribute to the risk management decision-making process will be in
geographically remote locations, it is critical to develop the capability for them to interact with one another,
view simulation results, and discuss possible safety options in a distributed fashion. In addition, there are
several levels of communication and dissemination that must be captured (i.e., from the scientists doing the
simulations to the public safety officials). The information will come from various sources, such as sensors
at the remote hazard site (cameras, etc.), GIS data, and immersive simulations. Further, the users themselves
will provide yet another kind of information. Using a web-based interface, users will be able to send
information to one another, view the information and images, and discuss the data and images – all in real-time. This information will be instantly posted to a common system to be viewed by all others.
There are several collaboratories in existence today, all of which serve a specific purpose, including
the Distributed Collaboratory Experiment Environments Program (DCEE) [Johnston 1997] and the Collaboratory for Research on Electronic Work (CREW) [Bos 2001]. Other work in the collaboratory area
includes the development of CPU sharing across the Internet for the purpose of solving massive equations
[Foster 1999], the development of protocols for sharing scientific instrumentations [Shaikh 1999], as well as
the coupling of high-end computing and imaging equipment for sharing 3D visualizations of complex
structures over the Internet [Young 1996]. Other areas where collaboratories are being employed include
health care [Patel 2000] and concurrent engineering [Jagannathan 1999].
5.1 Collaborative Toolset Development: Once data have been collected from sensors, satellite images, or
simulations, users must then have the ability to share the results immediately with others to allow for
improved decision-making in issues such as further data collection or risk mitigation planning. The
developed tools will include real-time video and audio conferencing, a “whiteboard” for sketching out
diagrams, and an annotated log file detailing each step of the interaction. An underlying theme is that
pertinent information about the volcano and associated simulations will be available in concise format for
archiving and submission to appropriate databases.
The collaborative capabilities are composed of a set of services that each user can customize upon
logging in to the system. These services will reside on a dedicated application server that will distribute
resulting output to each user. An important capability will be the use of “smart” agents to determine the
appropriate fidelity of collaborative tools each user should use. These agents will be in the form of Java programs and server-side scripts written in PHP (http://www.php.net) and XML to collect the desired information. These agents will become active when a scientist logs in. They will bring information back to the application server, including the speed of the network connection, the type of the scientist's graphics hardware, and the processing speed of the scientist's workstation. Then, appropriate tools will be suggested by the agents to ensure real-time interaction (a sketch of this selection logic appears after the feature list below). A user may choose these selections or a set of alternatives. Thus, the infrastructure will not control the information or how it is accessed and viewed, but rather will present options to achieve fast, intuitive collaboration. The collaborative toolset will include the following features:
• Multi-view presentation control service
• Video and audio conferencing
• Data transformation tools for detailed graphical presentation and analysis
• Virtual whiteboard for conceptual idea exchange
• Dynamic audio-to-text translation
• Multi-track annotation using audio, video and text (MPEG 3 & 4)
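A minimal sketch of the agent-based tool selection described above follows. The proposal describes the agents as Java programs with PHP/XML server-side scripts; this Python version, its capability fields and its thresholds are purely illustrative.

```python
# Suggest a fidelity level for each collaborative tool from a probed client
# profile (connection speed, graphics capability, CPU). Thresholds are
# hypothetical placeholders.
def suggest_tools(bandwidth_mbps, has_3d_accel, cpu_score):
    if bandwidth_mbps > 10 and has_3d_accel and cpu_score > 0.7:
        return {"video": "high", "visualization": "interactive 3-D", "audio": "full"}
    if bandwidth_mbps > 1:
        return {"video": "low", "visualization": "key frames", "audio": "full"}
    return {"video": "off", "visualization": "still images", "audio": "text chat"}

profile = {"bandwidth_mbps": 2.0, "has_3d_accel": False, "cpu_score": 0.4}
print(suggest_tools(**profile))
```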
5.2 Collaborative Infrastructure Development: The proposed infrastructure must incorporate several
different technologies, software tools, and data types into a single infrastructure. This infrastructure must be
available on any computer platform and adaptable to the level of client hardware as well as to the network
connection speed available. Further, this environment must be easy to use and understand. A user’s
information should be enhanced, not hindered, by the interface. The investigators at the University at Buffalo
have already had extensive experience in developing collaborative infrastructures with the development of
the Geographic Independent Virtual Environment (GIVE) [Winer and Bloebaum, 1999]. The goal in
developing GIVE is to allow individuals in any geographic location in the world to work concurrently on a
design or experimental process in real-time. The users can communicate their ideas to one another via words,
equations, graphics, and animations. The present GIVE environment (Figure 4) is web-based and viewable
with any web browser currently available (Netscape Communicator, Internet Explorer, etc.). The browser
must be Java compliant and must also support frames. The control frame located in the lower right corner is
used to place different files and information in the other frames and also gives the users the ability to upload
and download files. The user can load different files into adjacent visual frames to display various file
formats. The lower left portion of Figure 4 is the messaging tool, the top portion is the visualization area of
GIVE, and the user chooses how many frames to view when entering GIVE. The visualization windows
currently display VRML, GIF, JPEG, ASCII, PDF, MPEG, Quicktime, and AVI file formats.
The final front-end feature is the equation interface, which can be launched from the control frame. The equation interface is a Java applet allowing the scientist to construct an expression to communicate via mathematical expressions and nomenclature. GIVE allows real-time, platform-independent interaction for any scientist or group of scientists or other users. While GIVE is a solid foundation upon which to build, adaptations must be made to enable access to remote data and to better enable communication. Additional development must be performed to interface successfully with the other concepts and software tools proposed here.

[Figure 4. Sample system for collaboration.]
The collaborative environment can be visualized as a set of information technology services including visual and audio presentation, data transformation, data storage, data retrieval, and technologies that allow the integration and control of all services available in the environment. The architecture model that will be used is comprised of four major components: 1) Data abstraction component, 2) Operation and transport component, 3) Collaborative toolset (Presentation component), and 4) Digital connectors. See Figure 5 for how these interact.

[Figure 5. Diagram of networking and accessibility of the collaborative toolset from a single interface.]
The Data abstraction component is composed of a set of services and tools related to data
manipulation, storage and retrieval. Concepts such as “digital asset management”, “knowledge management”
and “digital rights management” and techniques like “digital watermarking”, “cryptography”, “federated
searches” across “information and data portals”, “queries by image content” and “digital content distribution
controls” will be part of the collaborative infrastructure and will be implemented as Data Services. The
operation and transport component is composed of a set of services and tools related to message transport
across services as well as control of such services and is subdivided into building blocks such as:
• Participant’s roles, responsibilities and access levels
• Web-services, data views and access control
• Dynamic directory services for all applications
• Application server development
• Synchronous and asynchronous messaging services
• Security and cryptography
• Watermarking and rights management
• Digital asset management and content management services
• Secure transmission of sensitive data
• Logging and tracking of events
• Digital rights management client components
• Access control and player
• User authorization
These building blocks will effectively provide features such as secure transactions from the
collaborative infrastructure to each user to protect potentially sensitive data. This will include more than
simple data encryption, but will require the use of constant user authentication, digital signatures, and secure
keys. Digital connectors will be used to interconnect the data services to the operation and transport services.
These digital connectors will be objects that inherit the interface descriptions of the connecting services and
provide some logic to the metadata transformation that will occur between the connecting services. XML
will be used in the metadata description of the transitioning/transported objects. Currently, digital connectors
need to be uniquely developed. Thus, the required methodology, technology, and services to allow the
creation of dynamic digital connectors will be developed.
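The sketch below gives one possible shape for a digital connector: an object implementing a shared interface that performs an XML metadata transformation between a data service and a presentation service. The class, element and attribute names are illustrative assumptions.

```python
# A "digital connector" as an object with a shared interface that transforms
# XML metadata between two connected services.
from abc import ABC, abstractmethod
import xml.etree.ElementTree as ET

class DigitalConnector(ABC):
    @abstractmethod
    def transform(self, metadata_xml: str) -> str:
        """Rewrite one service's metadata record into the target service's form."""

class DataToPresentationConnector(DigitalConnector):
    def transform(self, metadata_xml: str) -> str:
        record = ET.fromstring(metadata_xml)
        item = ET.Element("display-item", {
            "title": record.findtext("name", default="untitled"),
            "format": record.findtext("mime-type", default="unknown"),
        })
        return ET.tostring(item, encoding="unicode")

# Example metadata record from a (hypothetical) data service.
print(DataToPresentationConnector().transform(
    "<record><name>Colima debris-flow run 12</name><mime-type>video/mpeg</mime-type></record>"))
```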
5.3 Data Storage Access Method Development via Infrastructure: This collaborative infrastructure will
connect all our major sites (UB, CENAPRED, UNAM, USGS). Rather than attempt to centralize all data in
one place under a common format, the researchers propose to enable storage and access to data from
different sources under a single interface. Access to all these data will be transparent to a scientist using the
collaborative infrastructure. The infrastructure will have in place middleware tools to accommodate the
various types of data and methods for accessing them. This will be accomplished via a methodology
involving relational databases layered upon remote data, with dynamic digital connectors to retrieve and
display the desired data. All pertinent data are stored in binary files. A relational database is created which
contains pertinent descriptions about the data. Scripts gather user input data, then query the relational
database via SQL (Structured Query Language) to determine what data are then sent back to the user. An
appropriate digital controller is selected to retrieve and display results. If a suitable digital controller is not
available, then one is created dynamically to take into account the desired protocol, user’s network
connection, and present form of data to be accessed. Further, the relational database does not have to be
collocated with the actual data files, thus enabling easy remote data access. The queries are constructed from
simple controls (checkboxes, text searches, radio buttons). These database interactions are controlled by a
single server environment to ensure commonality of results to all users. The data are also in a truly platform-independent form, as no translation amongst systems is occurring. All original data remain on the original
systems on which they were created and/or stored. By developing a robust infrastructure, future remote data
sets will be automatically integrated with a few short steps directly through the collaborative infrastructure.
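As a concrete, purely illustrative sketch of this layered approach, the fragment below queries a small relational catalogue that describes remote binary files; the table and column names are hypothetical, and the caller would hand the returned records to an appropriate digital controller.

```python
# Query a relational metadata catalogue for remote datasets matching a request;
# the binary data themselves stay on their original systems.
import sqlite3

def find_datasets(catalog_db, volcano, max_resolution_m):
    con = sqlite3.connect(catalog_db)
    rows = con.execute(
        """SELECT file_url, format, resolution_m
           FROM datasets
           WHERE volcano = ? AND resolution_m <= ?
           ORDER BY resolution_m""",
        (volcano, max_resolution_m)).fetchall()
    con.close()
    return rows
```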
IT Research Challenges: The true challenge in this section is to develop an infrastructure to
effectively integrate heterogeneous networks and clients to function as a seamless system. A further
challenge is to develop methodologies for dynamic digital controller creation. This will automate integration
of additional databases, servers, and clients.
6. Software Integration (Jones, Patra, Pitman)
This proposal brings together data from several disciplines, integrating GIS and computer simulation
of mass flows for the purposes of risk and safety assessment. The requirements for this project include rapid
evaluation and visualization of data from a large repository of detailed numerical studies of volcanic flows
based on geographical and remote sensing data, as well as continued refinement of the simulation model
itself. These varied requirements impose a tremendous burden on the organization of the data; for example, a
low-risk unpopulated region may not be significant to the GIS data, but a detailed flow study might reveal a
significant impact on a more populated region further downstream. We propose to unify data resulting from
computer simulation of geophysical mass flows, GIS, and applied hazard assessment. The resulting fusion of
data will be complex and highly correlated, and must be portable (the ability to support multiple computer
architectures), scalable and support efficient search mechanisms. In addition, geographically distributed users
must be able to access the critical components of the data that are of particular interest to them, whether they
are in the field with a portable digital assistant, or in a hazard control center on a network backbone. To deal
with the varied requirements, we intend to use not only a hierarchical data storage (HDS) scheme, but also a
hierarchical storage management (HSM) approach. We propose to use existing technology for HDS, and
build a customized HSM infrastructure to better satisfy the varied requirements here.
We propose to leverage existing hierarchical data storage utilities to better manage the data
requirements of this project. Given the significance of the various data subsets, it is sensible to build into the
data storage a set of meta-data identifiers that can class (and even to some extent prioritize) the data itself.
There are several publicly available and widely used libraries, including NCSA's HDF5 and Unidata's
netCDF. Both of these libraries allow the creation of complex meta-data-based datasets, with the crucial
feature that all such data files will also be machine independent. Both netCDF and HDF5 are designed to
support complex scientific datasets with an emphasis on portability and self-description (indeed, netCDF and
HDF5 are to a certain extent inter-operable). Direct access to selected critical subsets is also important (and
supported by both libraries) for efficiency when reaching into a large dataset and extracting the pieces of
interest, whether for visualization or computational refinement.
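The fragment below sketches what storing a self-describing simulation snapshot might look like with HDF5 through the h5py binding (netCDF usage is analogous). The group layout, attribute names and values are illustrative assumptions, not a fixed project schema.

```python
# Store one simulation snapshot with machine-independent, self-describing
# metadata in HDF5 via h5py; attributes carry the meta-data identifiers that
# let downstream tools select and prioritise subsets.
import numpy as np
import h5py

depth = np.random.rand(512, 512)                 # e.g. flow depth on a grid
with h5py.File("flow_snapshot.h5", "w") as f:
    grp = f.create_group("simulation/step_0042")
    dset = grp.create_dataset("depth", data=depth, compression="gzip")
    dset.attrs["units"] = "m"
    grp.attrs["time_s"] = 12.5
    grp.attrs["site"] = "Popocatepetl"
    grp.attrs["dem_resolution_m"] = 10.0
```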
The format in which data is stored can have a strong influence on its utility. Vendor-specific data
files can be a bane to collaboration efforts, as seldom will all of the scientists (or public decision makers, for
that matter) have access to machines with the same architecture. Centralized storage is not the answer, as
network connectivity bottlenecks (particularly for large amounts of data) will arise. We intend to develop
and utilize fully portable flow and hazard data that can be co-located for easy (and fast) access by all
participating scientists and decision-makers. Thus, the machine-independent features of netCDF and HDF5
are key to maintaining the usability of our flow models and the data produced by them. The ability to co-locate key data allows distant (possibly remote) sites to take advantage of faster access to local IT resources, rather than depend on (unreliable) network access to a central data repository.
This project has a wide spectrum of data access requirements. The simulation and modeling stage
will require access to fast, high-performance storage directly attached (or accessible through a high
performance network fabric) to the computers running the flow software. Similarly, visualization or
computational steering of the simulations will also require such high performance storage. But what about
risk assessors, possibly located far from supercomputers and the grid backbone of high performance
networks? These assessors clearly need access to enough of the hazard assessment data to make critical
decisions, and our proposed software will build in, through HDS constructs like those above, methodology for: 1) distributed risk datasets for easy access from remote regions, 2) adaptive software for better data
connectivity management [e.g., low bandwidth = low detail or small area], and 3) feedback from remote
sensing and field operations to real-time simulations.
7. Education and Outreach (Sheridan, Bursik, Pitman, Patra)
Discipline-based scientists are often poorly trained in information technology, even as it becomes a
mainstay of contemporary research. This proposal, based on mathematical modeling, scientific computing,
and visualization integrated with geology and geography, offers a new approach to risk management. In
particular, it will alter the working environment of public safety officials. To achieve our goals, fundamental
IT issues must be addressed. The individual scientists at UB have histories of research and teaching
connections (see http://www.vrlab.buffalo.edu/visual and http://www.eng.buffalo.edu/~mfs for preliminary
work on volcanic hazards, http://www.ccr.buffalo.edu/computational-sci/certificate.htm describing our
certificate program in computational science, and http://www.geog.buffalo.edu/igis for a description of an
IGERT program in GIS). Through CCR and NYSCEDII, we have also developed a summer workshop program in computing and visualization for high school students (see http://www.ccr.buffalo.edu/education.htm). Past workshops have focused on computational chemistry; geoscience will be a future topic of this summer program.
Traditionally, the geo-sciences are not as quantitative as engineering disciplines, and high performance
computing is only slowly entering mainstream geology and geography. One challenge now is to forge a large, group-based education and research effort, one that brings IT and computational science into the daily work of geo-scientists, and geo-science into mathematics and engineering. Our cooperative experience in graduate
education through the CCR, designing a new certificate program in computational sciences, should prove
valuable. The NCGIA has developed an IGERT program, bringing earth science students together with
engineers, in an effective interdisciplinary setting. For the risk management effort proposed here, we will
develop instructional material that will serve as a model for other IT-geo-science alliances, building on our
CCR and NCGIA experiences. This material will include a) courses on scientific modeling and data
management, b) workshops and tutorials at conferences, c) a regular brown-bag seminar series for
participating faculty and students. The seminar will facilitate joint supervision of graduate students in
interdisciplinary research. [Remark: Sheridan, Bursik and Pitman have each participated in supervising the
research work of graduate students in the NCGIA program.] Fellows in the environmental modeling track of
Buffalo's IGERT program in GIS (Mark 1998) will also be invited to participate, to the benefit of both this project and the IGERT program.
We will also engage in a program of public education on earth sciences and computational sciences.
Sheridan has an extensive history of such activity in the context of volcano hazards, having developed or
helped develop programming for the Public Broadcasting Service (NOVA series), the BBC, and Canadian TV (Discovery Canada); see http://www.eng.buffalo.edu/~mfs. Bursik has also engaged in the development of the
Digital Library for Earth Sciences. These activities will be continued under the aegis of this program.
8. Personnel and Project Management:
The large scope of this project and the intellectual diversity of the investigators will require careful planning and execution. To this end we have defined a clear list of tasks and assigned responsibilities to each of the investigators. We list here the principal assignments and a timeline for major activities.
Year 1:
• Developing Realistic Simulation (Patra, Pitman, Bursik, Sheridan, Vallance, Jones): This activity will be
a principal focus of the first year. The primary goals of this activity will be to develop the proposed new
hybrid particle-grid algorithmic methodologies and to integrate existing codes and new schemes into a
comprehensive simulation framework.
• Integrating GIS and Simulation (Mark, Patra, Jones): In the first year the primary task will be
developing techniques for integrating the topographic information from the GIS into the simulation at
multiple resolutions. Appropriate interfaces for the selective querying of the GIS by the simulation codes
will also be developed.
• Integrated Immersive Visualization (Kesavadas, Bloebaum, Patra, Jones): Software necessary to
visualize the huge volumes of data in an interactive immersive environment will be developed.
• Developing Geographically Remote Collaborative Environment (Bloebaum, Winer, Sheridan): The primary goals here will be a clear definition of the multiple remote users that need to communicate, their needs, and the resources available to them. Integration with the simulations will also be started.
• Coordinating With Public Safety Officials (Sheridan, Vallance, Bursik, CENAPRED officials): This activity is a second principal goal for the first year, requiring the set-up and initiation of cooperative efforts, and the definition of the information required for suitable public safety planning, together with the formats and procedures for making it available.
Year 2:
• Developing Realistic Simulation (Patra, Pitman, Bursik, Sheridan, Vallance, Jones): In the second year
the primary activity will be further development of simulation schemes, and development of schemes for
verification and validation.
• Integrating GIS and Simulation (Mark, Patra, Jones): In the second year the primary task will be
developing techniques for integrating land use information from the GIS into the simulation output at
multiple resolutions. Appropriate interfaces for the selective querying of the GIS by multiple users will
also be developed.
• Integrated Immersive Visualization (Kesavadas, Bloebaum, Patra, Jones): This task will be one principal effort of the second year. Testing and modification of the immersive visualization will be undertaken. New schemes for selective decimation and multi-resolution visualization will be incorporated.
• Developing Geographically Remote Collaborative Environment (Bloebaum, Winer, Sheridan): This is the second principal task for year two; the primary goal will be customization of the GIVE environment to the needs of the multiple classes of users.
• Coordinating With Public Safety Officials (Sheridan, Vallance, Bursik, CENAPRED officials): A minimally functional communication and interaction system will be designed and put in place for colleagues at CENAPRED, UNAM, and Colima.
Year 3:
• Developing Realistic Simulation (Patra, Pitman, Bursik, Sheridan, Vallance, Jones): Further refinement
and testing of simulation schemes, and development of user interfaces.
• Integrating GIS and Simulation (Mark, Patra, Jones): Further integration of GI data, including social and lifeline systems (roads, water, electricity), for visualization.
• Final Integration of Simulation, Visualization and Collaboration: The primary activity here is the
integration of simulation, visualization and collaboration.
• Testing by public safety personnel and geo-scientists locally and at remote locations: Simulation codes, data feeds, and collaborative work environments will be tested “live” by public safety planners, geo-scientists, field observers and other ‘end users’.
• External Review by USGS and CENAPRED: We will convene a review panel comprised of experts from USGS, CENAPRED and other stakeholders at the end of each year.
9. Results from Prior NSF Support:
A. Patra: CAREER: Integrated Research and Education for Realistic Simulation Using HPC, ASC-9702947, $206,000+$25,000, 02/97-03/02. In this project we are developing computational techniques, algorithmic
methodology, and mathematical, physical and engineering models to make efficient use of HPC hardware
in the context of two computationally challenging problems with great economic potential. The first is the
analysis of dental implants and supporting bone structure, and the second is the modeling of complex time-dependent 3D viscous flows. The primary research activity has been the development of a framework for
supporting parallel adaptive hp finite element methods, which can potentially deliver orders of magnitude
more accuracy for a given computational cost but require complex and dynamic grids. We have worked on
developing distributed dynamic data management schemes, dynamic load balancers, solvers and adaptive
strategies suitable for these schemes. Additional information and some of the publications arising this from
project may be found at http://wings.buffalo.edu/eng/mae/acm2e.
M. Bursik: Petrology & Geochemistry Program: Topographic Controls on the Generation of
Elutriated Ash Clouds I, II. EAR-9316656; EAR-9725361; $147,000+$186,000; 1/94-12/96 (extended to
12/97; renewed 1/98-12/01). The project is directed at understanding the mechanisms controlling the
formation of elutriated ash clouds (coignimbrite or 'phoenix' clouds) generated by pyroclastic flows.
Eruptions that generate substantial coignimbrite clouds are among the largest in the geological record, yet
little is known about the factors responsible for their formation. Laboratory experiments [Woods et al., 1998]
and theoretical considerations [Bursik & Woods, 1996] showed how pyroclastic flows might propagate and
interact with topography to form coignimbrite clouds. Field data from Mount St. Helens were found to be
consistent with deposition from a subcritical, dilute pyroclastic density current that would have generated a
coignimbrite cloud primarily at its distal margin [Bursik et al., 1998]. Course and Curriculum Development
Program: $75K+$99K; 3/95-12/98, 8/99-8/01. Two separate projects to develop interactive courseware for
the earth sciences. 1) a web-based interactive teaching database for advanced hydrogeology: the Mirror Lake
watershed, and 2) new geology laboratories: data acquisition, analysis, and multimedia modules using
interactive computer modeling of geologic phenomena (part II). Approximately one dozen undergraduate
students and two graduate students participated in this work over the years (see the April 2001 feature of the Digital Library for Earth Science Education, www.dlese.org).
C. Bloebaum: In August 1995, Dr. Christina Bloebaum received a Presidential Faculty Fellow
(PFF) Award to develop formal optimization methods for large-scale design (DMI 9553210, $500,000 from
8/95-7/2000, ‘Development of Methods for Multidisciplinary Design’). In addition to the development of
mathematically-based methodologies for the optimization and design of multidisciplinary systems, Dr.
Bloebaum initiated a new research area (for her) in visualization. The goal was to focus on technologies
centering around the Web and high-end visualization to better enable the development of methodologies to
aid industrial designers in intuitive and geographically distributed ways. In the spring of 1998, Dr. Bloebaum
(as Co-PI, with three other colleagues including Dr. Kesavadas, PI) received a grant of $129K (including cost
match from UB) from NSF to develop a visualization laboratory (CISE 9729853, “Development of High-performance Real Time Visualization Laboratory”). Out of these grants came the groundwork for the visualization aspect of the research supported by DMII in a grant with Dr. K. Lewis in 1998 ($185,917, 6/98-9/00, ‘Visualization as a Decision Support Tool in Multidisciplinary Design’). The success of these efforts,
all sponsored by NSF, provided the foundation for NYSCEDII, which was funded by New York State last
year at an initial investment of $2,500,000 (a pledge of $5,000,000 has been made for an additional capital
investment next year). Bloebaum is the Director of NYSCEDII and holder of the Chair for Competitive
Product and Process Design. Over 30 journal publications, conference presentations and conference papers
have resulted from previous NSF funding, and more than 20 students have been supported and graduated.
B. Pitman: Modeling Particle Fluid Flows (Pitman). Since 1995, Pitman has been supported by the Division of Applied Mathematics and Computational Mathematics (DMS-9504433 and DMS-9802520) to study (1)
stability and well-posedness of constitutive relations for granular materials, including the influence of
randomness in developing fluctuations of stress, and (2) particle-fluid flows, especially very small particles
at high solids volume fraction, where, it appears, new kinds of constitutive assumptions need to be
considered. Pitman has also worked with Bursik on the experimental problem of particle flow down an
incline. In current work, Pitman is examining new models of high solids-volume-fraction particle-fluid flows. He is developing a "first principles" particle-fluid code: a simulation methodology that solves for particle motion in a fluid without modeling the inter-phase momentum transfer (by, say, a Darcy law), but rather resolves that interaction as part of the computation by extending the ideas of Peskin's Immersed Boundary Technique. In other work, Pitman has worked with colleagues on modeling and computing applications in the renal hemodynamics of the nephron, the principal functional unit of the kidney (supported in part by NIH DK 42091).
M. Sheridan: Topographic Controls on the Generation of Elutriated Ash Clouds (Bursik and Sheridan)
EAR-9316656, $147,000, 1/15/94-12/31/96 (extended to 12/97); see under Bursik for details. Physical and
Geochemical Interpretation of the 1964 Catastrophic Eruption of Sheveluch Volcano, Kamchatka (Sheridan)
EAR-9417605 $150,000 3/15/95-2/28/98. This work supported field research of the PI and his field team at
Sheveluch in 1995 and studies of other Kamchatka volcanoes. A study at Tolbachik Volcano (Doubik, Hill
and Doubik) emphasized the role of ground water/basaltic melt interaction as a leading mechanism of the
white ash eruptions during the 1975 Tolbachik Fissure Eruption. Work at Ksudach focused on the study of
the most recent stage of the caldera evolution (Macias and Sheridan, 1995; Doubik and others, in
preparation). A comparative study of the products of the Plinian phase of the 1964 Sheveluch eruption and
the 1980 to 1995 dome shows that under shallow conditions the andesitic magma experienced active
crystallization during the 16-year interval. Debris deposits of the 1964 eruption contain juvenile andesite (a fact that was previously unrecognized), the textural and petrologic features of which suggest that this material
underwent explosive decompression during the eruption. This grant supported the Ph.D. research of Philip
Doubik (1997) and partially supported the Ph.D. fieldwork of Carmelo Ferlito (1996) on the petrology of Old
Sheveluch Volcano (Ferlito and Sheridan, in preparation). Sheridan: EAR-9901694, 1998 Volcan Casita mudflow, Nicaragua, $5,485, 11/07/98-12/31/99. This major disaster, associated with Hurricane Mitch, which hit Central America in October 1998, was a great landslide that killed about 2,000 people living
below Casita Volcano, Nicaragua (Sheridan and others, 1999; Sheridan and Hubbard, in preparation).
Sheridan provided an unpublished report of this study to the Nicaraguan government together with a list of
12 recommendations to reduce the risk of future mudslides of this type.