Testbed for Mobile Augmented Battlefield Visualization: Summing Up
May 10, 2006
Publications in the Last Year
• Xin Zhang, Tazama Upendo St. Julien, Ramesh Rajagopalan, William Ribarsky, Pramod Varshney, Chilukuri Mohan, and Kishan Mehrotra. Dynamic Decision Support for Mobile Situational Visualization. AppliedVis 2005.
• William Ribarsky, co-editor, Special Issue on Haptics, Telepresence, and Virtual Reality,
IEEE Transactions on Visualization and Computer Graphics (November, 2005).
• Justin Jang, Peter Wonka, William Ribarsky, and C.D. Shaw. Punctuated Simplification of
Man-Made Objects. To be published, The Visual Computer.
• Tazama St. Julien, Joseph Scoccinaro, Jonathan Gdalevich, and William Ribarsky. Sharing
of Precise 4D Annotations in Collaborative Mobile Situational Visualization. Submitted to
IEEE Symposium on Wearable Computing.
• Remco Chang, Thomas Butkiewicz, Caroline Ziemkiewicz, Zachary Wartell, Nancy
Pollard, and William Ribarsky. Using Urban Legibility to Produce Completely Navigable
Large Scale Urban Models. To be published, ACM SIGGRAPH 2006 Short Papers.
• Remco Chang, Thomas Butkiewicz, Caroline Ziemkiewicz, Zachary Wartell, Nancy
Pollard, and William Ribarsky. Hierarchical Simplification of City Models to Maintain
Urban Legibility. Submitted to IEEE Transactions on Visualization and Computer
Graphics.
• Xin Zhang, Tazama Upendo St. Julien, Ramesh Rajagopalan, William Ribarsky, Pramod Varshney, Chilukuri Mohan, and Kishan Mehrotra. An Integrated Path Engine for Mobile Situational Visualization. To be submitted.
Matrix of Project Activities and Results

Proposed Task | What Was Done | Done This Year
Multimodal 3D Interaction | Development and evaluation of gesture and voice interface; implementation and use of new interface in mobile environment. |
Mobile Visualization | Implementation and initial evaluation; scenario development and evaluation. | Integrated mobile sitvis & decision support
4D Modeling | Automated tree modeling; initial modeling of large collections; further modeling of heterogeneous collections. |
Dynamic, Universal Data Structures | Paged object data structure for simple buildings; scalable structure; multiresolution buildings from multiple sources. |
Interactive Rendering and Visualization | Simple building LODs; view-dependent, appearance-preserving methods; new punctuated simplification approach. | Urban legibility approach for very large collections of buildings
Collaboration and Integration | Initial implementation of mobile augmented testbed; use and evaluation of testbed. |
Technology Transfer | Work with Army, Sarnoff Corp., NRL, DHS. | New projects with Army and DHS; possible new project with DTO
Transitions
• Presented mobile situational visualization and its relation to homeland security at invited talks at a special session of the AAAS meeting on the National Visualization and Analytics Center (February, 2005), at AppliedVis 2005 (May, 2005), and at an invited presentation for the DHS Regional Visualization and Analytics Centers (January, 2006).
• Using work in urban terrain analysis begun here as a foundation, began work on a project funded by ARO for eye-point dependent models applied to terrain analysis and applications such as line-of-sight.
• Made a proposal for the DTO ARIVA project that will use, among other things, urban infrastructure visualization. The proposal is now in Phase 2 evaluation.
• Established the Southeastern Regional Visualization and Analytics Center, funded by DHS. Among other things, the SRVAC will be looking at critical infrastructure simulations for disaster relief planning and emergency response. The terrain visualization and modeling capabilities developed here will be used.
Visualization and Analytics Centers
A Partnership with Academia, Industry, Government Laboratories
[Map: NVAC (Pacific Northwest National Laboratory) with DHS and its partner centers: RVACs at the University of Washington, Penn State, Stanford University, Purdue University, and Univ. of North Carolina Charlotte / Georgia Tech, plus an IVAC, a GVAC, the Consortium, and Scholars.]
Detecting the Expected - Discovering the UnexpectedTM
http://nvac.pnl.gov/
www.pnl.gov/infoviz
NVAC: Pacific Northwest National Laboratory
Visualization and Analytics Centers
A Partnership with Academia, Industry, Government Laboratories
[Expanded map: the same NVAC/DHS partner network with additional partners and links, including NSF, Drexel University, NY Emergency Management, the Port Authority, the Indiana Univ. School of Medicine, Bank of America, and international connections (Canada, Europe, Australia, New Zealand, the Pacific Rim, Alaska, and Hawaii).]
Detecting the Expected - Discovering the UnexpectedTM
http://nvac.pnl.gov/
www.pnl.gov/infoviz
NVAC: Pacific Northwest National Laboratory
Mobile Situational Visualization
Mobile Situational Visualization: An extension of situation awareness that exploits and integrates interactive visualization, mobile computing, wireless networking, and multiple sensors:
• Mobile users with GPS, orientation sensing, cameras, wireless
• User carries own 3D database
• Servers that store and disseminate information from/to multiple clients (location, object/event, weather/NBC servers)
• Location server to manage communications between users and areas of interest for both servers and users
• Ability to see weather, chem/bio clouds, and positions of other users
• Accurate overviews of terrain with accurately placed 3D buildings
• Ability to mark, annotate, and share positions, directions, speed, and uncertainties of moving vehicles or people
• Ability to access and play back histories of movement
[Photo: user equipped with the mobile situational visualization system.]
Mobile Situational Visualization
[Screenshots: the mobile sitvis interface (buttons, pen tool, drawing area, list of collaborators) and a collaboration example in which a mobile team shares observations of vehicle location, direction, and speed.]
Mobile Sitvis Collaborative Environment
• Everybody has a location in space and time in the Virtual World
• Geographic server lookup approach (a minimal sketch follows below):
  – Users
  – Location Servers
  – Data Servers
[Diagram: users connect through a Location Server to data servers (Traffic, Annotation, GeoData, and Weather servers).]
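As a rough illustration of the geographic server lookup approach described above, the Python sketch below shows one way a location server could register mobile users' positions in space and time and report which data servers cover their current area of interest. The class names, the bounding-box coverage model, and the example coordinates are illustrative assumptions, not the project's actual interfaces.

```python
# Minimal sketch, assuming a simple registry keyed by area of interest.
from dataclasses import dataclass
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]   # (min_lon, min_lat, max_lon, max_lat)

def _contains(box: Box, lon: float, lat: float) -> bool:
    return box[0] <= lon <= box[2] and box[1] <= lat <= box[3]

@dataclass
class DataServer:
    name: str            # e.g. "traffic", "annotation", "geodata", "weather"
    coverage: Box        # area of interest this server holds data for

class LocationServer:
    """Tracks each user's position in space and time and tells clients which
    data servers cover their current area of interest."""
    def __init__(self, servers: List[DataServer]):
        self.servers = servers
        self.users: Dict[str, Tuple[float, float, float]] = {}  # id -> (lon, lat, t)

    def update_user(self, user_id: str, lon: float, lat: float, timestamp: float):
        self.users[user_id] = (lon, lat, timestamp)

    def lookup(self, user_id: str) -> List[str]:
        """Return the names of data servers relevant to the user's position."""
        lon, lat, _ = self.users[user_id]
        return [s.name for s in self.servers if _contains(s.coverage, lon, lat)]

# Example (hypothetical coverage boxes around midtown Atlanta):
ls = LocationServer([DataServer("traffic", (-84.41, 33.74, -84.37, 33.80)),
                     DataServer("weather", (-85.0, 33.0, -84.0, 34.0))])
ls.update_user("unit-1", -84.39, 33.77, timestamp=0.0)
print(ls.lookup("unit-1"))                 # ['traffic', 'weather']
```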
Testbed Development
• Georgia Tech campus: navigable environment
• Accurate placement of modeled buildings and trees from multiple sources
• Detailed urban component with realistic urban buildings
Scenarios and Results
Scenario 1
A commander directs his unit while out of its sight. Through the mobile sitvis system he creates waypoints and paths that individuals in the unit move to.
Scenario 2
Three individuals in a unit track a moving subject. They
must keep it in sight and coordinate their tracking
activities.
In both scenarios, working with the full mobile sitvis
system is compared with a traditional method (individuals
with only radio communication and maps). Tracked
subjects followed paths (set for about equal length and
number of turns) that were not known to unit members.
Scenarios and Results
Scenario 2: History of all users’ locations and annotations over a 45-minute session. (Tracked subject is in red.)
Scenarios and Results
What did we find?
• Mobile sitvis works!
• It does as well as the traditional method (radio + map) for tracking a moving subject.
• It is better than the traditional method for command operations that direct multiple units.
• It provides significant new capabilities:
  - Significantly more accurate location than GPS alone.
  - Specific digital annotations in space-time that can be shared immediately.
  - Overviews of several moving, annotated entities that can be understood all at once.
  - Histories for tracking and analysis.
• This work suggests new scenarios of greater impact.
Mobile Situational Visualization +
Dynamic Decision Support
Collaboration with the Syracuse team
[Architecture diagram with components: Sensors, User Input, 3D Interaction, Visualization, Decision Support Module, 4D Geospatial Server.]
Mobile Situational Visualization +
Dynamic Decision Support
We have fully integrated the dynamic decision support engine with mobile situational visualization, providing the following capabilities:
• a structure for shared interaction and collaboration among mobile users,
• general methods for heterogeneous spatiotemporal sensor organization and display,
• a decision support module supporting activity recognition, response planning, and behavioral modeling that is integrated with the mobile visualization structure and will accept the mobile users as collaborating agents,
• a prototype mobile situational visualization system that employs the decision engine to produce meaningful responses in one or more urban scenarios.
Dynamic Decision Support: Grid-Based Approach
Dijkstra’s single-source, single-destination shortest path algorithm is used (a minimal sketch follows below).
• The region to be traversed is laid out in a grid, balancing computation cost (number of grid cells) versus accuracy (edges or nodes in the same grid cell).
• Edge relaxation is used to choose vertices. Vertices connected by valid edges are considered, and those with the best value of a quality metric, Q, are chosen.
• A probability risk model is applied. A simple zero-mean Gaussian distribution with a finite range is used to model point risks. Default values are given for grenades, rifles, etc.
• If multiple risks overlap in a grid cell, the combined threat probability for n threats is 1 - (1 - P1)(1 - P2)...(1 - Pn).
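The slide above describes the approach at a high level. As a hedged illustration only, the sketch below shows one way a grid-based Dijkstra search with the combined threat probability 1 - (1 - P1)...(1 - Pn) and a user-selectable balance between path length and risk might look. The 8-connected unit grid, the clipped-Gaussian threat parameters, the blended cost function, and all names are assumptions made for the example, not the project's actual implementation or its quality metric Q.

```python
# Minimal sketch of grid-based route planning under point threats.
import heapq
import math

def threat_probability(cell, threats):
    """Combine independent point threats in one grid cell:
    P = 1 - (1 - P1)(1 - P2)...(1 - Pn)."""
    survive = 1.0
    for (tx, ty, peak, sigma, max_range) in threats:
        d = math.hypot(cell[0] - tx, cell[1] - ty)
        if d <= max_range:                        # finite range of the threat
            p = peak * math.exp(-d * d / (2.0 * sigma * sigma))
            survive *= (1.0 - p)
    return 1.0 - survive

def plan_route(grid_w, grid_h, start, goal, threats, alpha=0.5):
    """Dijkstra over an 8-connected grid; edge cost blends length and risk.
    alpha = 1.0 gives the shortest path, alpha = 0.0 the lowest-risk path."""
    risk = {(x, y): threat_probability((x, y), threats)
            for x in range(grid_w) for y in range(grid_h)}
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, math.inf):
            continue                               # stale queue entry
        x, y = cell
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nxt = (x + dx, y + dy)
                if not (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h):
                    continue
                step = math.hypot(dx, dy)          # 1 or sqrt(2)
                cost = alpha * step + (1.0 - alpha) * risk[nxt]
                if d + cost < dist.get(nxt, math.inf):
                    dist[nxt] = d + cost
                    prev[nxt] = cell
                    heapq.heappush(pq, (d + cost, nxt))
    if goal not in prev and goal != start:
        return []                                  # goal unreachable
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```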
Dynamic Decision Support: Grid-Based Approach
Red boundary shows the selected area in a dense urban area (midtown Atlanta).
Mobile Situational Visualization +
Dynamic Decision Support
Our initial urban scenario is route planning under dynamic threats. Threats of different extents and risks are sighted, placed, and shared by mobile users. The decision engine provides a real-time path that balances, on a continuous scale, risk against path length (where the mobile user can select the balance).
Green path (highlighted with green dots): optimal (in this case low-risk) path for the selected balance between risks and path length.
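Continuing the earlier hypothetical sketch, varying the balance parameter reproduces the kind of spectrum shown on the next slide (safe route, some risk, shortest route); the threat tuple below is made up purely for illustration.

```python
# Hypothetical example values; plan_route is the sketch from the grid-based slide.
threats = [(30, 40, 0.9, 5.0, 15.0)]      # (x, y, peak risk, sigma, max range)
safe      = plan_route(100, 100, (0, 0), (99, 99), threats, alpha=0.1)
some_risk = plan_route(100, 100, (0, 0), (99, 99), threats, alpha=0.5)
shortest  = plan_route(100, 100, (0, 0), (99, 99), threats, alpha=1.0)
```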
Mobile Situational Visualization +
Dynamic Decision Support
The mobile decision engine produces fast, accurate, and usable results.
[Map annotations: safe route, some risk, shortest route.]
Future Directions
• Combine dynamic route planning with line-of-sight to take into account obstructions in determining risk.
• Scale up to larger areas.
• Take into account moving risks or risks that change in other ways.
• Support other decisions.
Interactive Visualization of Very Large Urban Spaces
• How can one freely navigate very large urban spaces?
  - A medium-sized city can have several hundred thousand buildings; a large city can have millions of buildings.
  - Even if the simplest building models are rendered, there could still be an overwhelming amount of geometry and textures.
What should be rendered? Apply knowledge from urban planning: urban legibility.
[Image: view of Xinxiang, China, with over 26,000 buildings.]
Interactive Visualization of Very Large Urban Spaces
• Urban legibility embodies concepts from urban planning about what makes an urban space understandable and more easily navigable. (For example, depict the city around the concepts of paths, edges, districts, nodes, and landmarks.)
• Can we find an automated way to embody these concepts and thus keep the city legible (and recognizable) at all scales? (One simple illustration follows below.)
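As one simple illustration of turning a legibility concept into an automated test (not the method actually used in this work), the sketch below flags landmark candidates as buildings that stand out strongly from their neighborhood; a simplification pass could then preserve such buildings at every scale. The radius and ratio thresholds are arbitrary assumptions.

```python
# Illustrative sketch only: a crude automated landmark test.
from dataclasses import dataclass
from typing import List

@dataclass
class Building:
    x: float
    y: float
    height: float

def landmark_candidates(buildings: List[Building],
                        radius: float = 200.0,
                        ratio: float = 2.0) -> List[Building]:
    """Return buildings at least `ratio` times taller than the mean height
    of their neighbors within `radius` meters."""
    landmarks = []
    for b in buildings:
        neighbors = [o.height for o in buildings
                     if o is not b
                     and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 <= radius ** 2]
        if neighbors and b.height >= ratio * (sum(neighbors) / len(neighbors)):
            landmarks.append(b)
    return landmarks
```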
Interactive Visualization of Very Large Urban Spaces
Yes, we can shape our automated urban analysis to embody the urban legibility principles.
[Images: original (textured) district; simplification with our method; simplification with Qslim; our simplified model with textures applied.]
Interactive Visualization of Very Large Urban Spaces
[Images: skyline at full resolution vs. skyline with 7% of the polygons (landmark preservation); view-dependent rendering at full resolution vs. an interactive view with 18% of the polygons and greatly simplified textures.]
Perceptual errors are not very noticeable because conceptual structure (i.e., what's important) is retained.
View-Dependent Rendering of Very Large Collections
• Hierarchical multiresolution organization
• View-dependent LOD for large collections of 3D models (a rough sketch follows below)
[Diagram: N levels of linked global quadtrees; a bounding box projected from the viewpoint onto the screen determines the selected LOD.]
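As a hedged sketch of view-dependent LOD selection over a quadtree (assumed data layout and thresholds, not the project's code), the snippet below picks, for each node, a level of detail from the size its bounding sphere projects to on screen, descending to children only where more detail is needed.

```python
# Minimal sketch: choose LODs from projected bounding-sphere size.
import math
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QuadNode:
    center: tuple          # (x, y, z) world-space center of the node's bounds
    radius: float          # bounding-sphere radius of the node
    lods: List[object]     # coarse -> fine representations for this node
    children: List["QuadNode"] = field(default_factory=list)

def projected_size(node: QuadNode, eye: tuple, fov_y: float, screen_h: int) -> float:
    """Approximate on-screen height (pixels) of the node's bounding sphere."""
    d = math.dist(node.center, eye)
    if d <= node.radius:
        return 10.0 * screen_h                 # viewer inside the node: very large
    ang = 2.0 * math.asin(min(1.0, node.radius / d))
    return screen_h * ang / fov_y              # fov_y in radians

def select_lods(node: QuadNode, eye, fov_y, screen_h,
                pixels_per_lod: float = 100.0, out: Optional[list] = None):
    """Collect (node, lod_index) pairs to render for the current viewpoint."""
    if out is None:
        out = []
    px = projected_size(node, eye, fov_y, screen_h)
    wanted = int(px // pixels_per_lod)
    if node.children and wanted >= len(node.lods):
        for child in node.children:            # node too coarse: refine
            select_lods(child, eye, fov_y, screen_h, pixels_per_lod, out)
    else:
        out.append((node, min(wanted, len(node.lods) - 1)))
    return out
```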
Knowledge Visualization: Very Large Urban Spaces
[Video]
Organizing Large Collections of 3D Models for Interactive Display
• Merging of different types and formats
• Automated replacement of lower-resolution duplicate structures (a rough sketch follows below)
Common format and organization for different types
[Diagram: linked global quadtrees organizing the combined collection.]
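As a rough sketch of the automated replacement of lower-resolution duplicates (the matching rule here, bucketing by footprint cell and comparing triangle counts, is an assumption for illustration, not the project's actual criterion), the snippet keeps the most detailed model found for each location.

```python
# Illustrative sketch: keep the highest-resolution model per footprint cell.
from dataclasses import dataclass
from typing import Dict, Iterable, List, Tuple

@dataclass
class Model:
    source: str
    x: float          # footprint centroid (world coordinates)
    y: float
    triangles: int    # crude resolution measure

def dedupe(models: Iterable[Model], cell: float = 10.0) -> List[Model]:
    """Bucket models by footprint cell; keep the most detailed one per bucket."""
    best: Dict[Tuple[int, int], Model] = {}
    for m in models:
        key = (int(m.x // cell), int(m.y // cell))
        if key not in best or m.triangles > best[key].triangles:
            best[key] = m
    return list(best.values())
```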
Paging, Culling, and Fast Rendering
[Diagram: blocks of the linked global quadtree are paged from out-of-core storage as needed (a minimal sketch follows below).]
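A minimal sketch of the paging idea, assuming quadtree blocks serialized one per file and a small LRU-managed in-memory cache; the file naming, pickle format, and cache size are illustrative assumptions rather than the system's actual out-of-core layout.

```python
# Illustrative sketch: LRU cache of quadtree blocks loaded on demand.
from collections import OrderedDict
from pathlib import Path
import pickle

class BlockCache:
    def __init__(self, storage_dir: str, max_blocks: int = 64):
        self.storage_dir = Path(storage_dir)
        self.max_blocks = max_blocks
        self.cache: "OrderedDict[str, object]" = OrderedDict()

    def get(self, block_id: str):
        """Return a quadtree block, loading it from disk on a cache miss."""
        if block_id in self.cache:
            self.cache.move_to_end(block_id)           # mark as recently used
            return self.cache[block_id]
        with open(self.storage_dir / f"{block_id}.blk", "rb") as f:
            block = pickle.load(f)                      # hypothetical block format
        self.cache[block_id] = block
        if len(self.cache) > self.max_blocks:
            self.cache.popitem(last=False)              # evict least recently used
        return block
```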
Hierarchical, Multiresolution Organization
[Diagram: quadtrees linked to LODs.]
Hierarchical, Multiresolution Organization
[Diagram: quadtree interior and leaf nodes with attached contents (model1, collection1, collection2, tree2, sg3, collection3).]
Urban Legibility
[Images: collections of simple geometry; detailed hierarchical simplification.]
Questions?
www.viscenter.uncc.edu
Publications from Previous Years
• Ernst Houtgast, Onno Pfeiffer, Zachary Wartell, William Ribarsky, and Frits Post. Navigation and Interaction in a MultiScale Stereoscopic Environment. IEEE Virtual Reality 2004.
• Nickolas Faust and William Ribarsky. Integration of GIS, Remote Sensing, and Visualization. Invited paper, Proc.
Remote Sensing 2003 (Barcelona, 2003).
• William Ribarsky, editor (with Holly Rushmeier). 3D Reconstruction and Visualization of Large Scale Environments.
Special Issue of IEEE Computer Graphics & Applications (December, 2003).
• David Krum, Olugbenga Omoteso, William Ribarsky, Thad Starner, and Larry Hodges. Evaluation of a Multimodal
Interface for 3D Terrain Visualization. pp. 411-418 IEEE Visualization 2002.
• Justin Jang, William Ribarsky, Christopher Shaw, and Peter Wonka. Appearance-Preserving View-Dependent
Visualization. IEEE Visualization 2003, pp. 473-480.
• William Ribarsky, Zachary Wartell, and Nickolas Faust. Precision Markup Modeling and Display in a Global Geospatial
Environment. Proceedings SPIE 17th International Conference on Aerospace/Defense Sensing, Simulation, and Controls
(2003).
• William Ribarsky. Virtual Geographic Information Systems. The Visualization Handbook, Charles Hanson and
Christopher Johnson, editors (Academic Press, New York, 2003).
• Zachary Wartell, Eunjung Kang, Tony Wasilewski, William Ribarsky, and Nickolas Faust. Rendering Vector Data over
Global, Multiresolution 3D Terrain. Eurographics-IEEE Visualization Symposium 2003, pp. 213-222.
• Peter Wonka, Michael Wimmer, Francois Sillion, and William Ribarsky. Instant Architecture. Siggraph 2003, pp. 669-678 (2003).
• Tony Wasilewski, William Ribarsky, and Nickolas Faust. From Urban Terrain Models to Visible Cities. Vol. 22(4), pp.
10-15, IEEE CG&A (2002).
• David Krum, Rob Melby, William Ribarsky, and Larry Hodges. Isometric Pointer Interfaces for Wearable 3D
Visualization. ACM CHI 2003.
• William Ribarsky, “Towards the Visual Earth,” Workshop on Intersection of Geospatial Information and Information
Technology, National Research Council (October, 2001).
• William Ribarsky, Christopher Shaw, Zachary Wartell, and Nickolas Faust, “Building the Visual Earth,” to be published,
SPIE 16th International Conference on Aerospace/Defense Sensing, Simulation, and Controls (2002).
Publications from Previous Years
• David Krum, William Ribarsky, Chris Shaw, Larry Hodges, and Nickolas Faust. Situational Visualization. pp. 143-150, ACM VRST 2001 (2001).
• David Krum, Olugbenga Omoteso, William Ribarsky, Thad Starner, and Larry Hodges. Speech and Gesture
Multimodal Control of a Whole Earth 3D Virtual Environment. Eurographics-IEEE Visualization Symposium
2002. Winner of SAIC Best Student Paper award.
• “Acquisition and Display of Real-Time Atmospheric Data on Terrain,” T.Y. Jiang, William Ribarsky, Tony
Wasilewski, Nickolas Faust, Brendan Hannigan, and Mitchell Parry, Proceedings of the Eurographics-IEEE
Visualization Symposium 2001, pp. 15-24.
• “Client-Server Modes of GTVGIS,” Nick Faust, William Ribarsky, and Frank Jiang, Vol. 4368A, SPIE 15th
Annual Conference on Aerosense (2001).
• “Hierarchical Storage and Visualization of Real-Time 3D Data,” with Mitchell Parry, Brendan Hannigan, William
Ribarsky, T.Y. Jiang, and Nickolas Faust, Proc. SPIE 15th Annual Conference on Aerosense 2001, Vol. 4368A.
• “Semiautomatic Landscape Feature Extraction and Modeling,” Matthew Grimes, Tony Wasilewski, Nickolas
Faust, and William Ribarsky, Proc. SPIE 15th Annual Conference on Aerosense (2001), Vol. 4368A.
• “Real-Time Global Data Model for the Digital Earth,” William Ribarsky, Nickolas Faust, T.Y. Jiang, and Tony Wasilewski, Proceedings of the International Conference on Discrete Global Grids (2000).
• “Development of Tools for Construction of Urban Databases and Their Efficient Visualization,” Nickolas Faust and
William Ribarsky, Modeling and Visualizing the Digital Earth, Mahdi Abdelguerfi, Editor (Kluwer, Amsterdam,
2001).
• Computers & Graphics, Special Issue on Data Visualization (Vol. 24, no. 3, June, 2000), Editors Eduard Groeller,
William Ribarsky, and Helwig Loeffelmann.
Students Who Worked on Project
• Remco Chang
• Tom Butkiewicz
• Caroline Ziemkiewicz
• Xin Zhang
• Justin Jang
• Tazama St. Julien
• David Krum
• Olugbenga Omoteso
• Jaeil Choi
• Weidong Shi
• Guoquan (Richard) Zhou
• Eunjung Kang
• Brendan Hannigan
• Mitchell Parry
• Matthew Grimes
• Ernst Houtgast
• Onno Pfeiffer
• Joseph Scoccinaro
• Jonathan Gdalevich