Modelling Spatial Video as part of a GIS Video-Analysis Framework
Paul Lewis
Introduction
It is now common for video, whether real-time or archived, mobile or static, to be georeferenced and stored in large archives that users of expert systems can access and interact with. In the ground-based terrestrial context, georeferenced video has also become more commonly collected and accessed in recent years. Collecting, storing and distributing these data are now more easily achieved through the development and affordability of dedicated systems such as Mobile Mapping Systems (MMS); the StratAG XP1 experimental platform is shown in figure 1. Such MMS are being used for infrastructural monitoring and mapping because they can now generate high-accuracy geospatial data.

However, significant problems exist for access to large volumes of archived georeferenced MMS data in a GIS context, particularly video, which is the subject of this poster. These include the semantics behind the modelling of its spatial content and the development of a computationally efficient query model that can isolate video sequences of geographical interest. This poster outlines a georeferenced-video framework in which a GIS-viewshed-oriented approach to modelling and querying terrestrial mobile imagery is detailed. Technically, this viewshed model implements a subtle and more flexible optimisation of the Open Geospatial Consortium's (OGC) Geo-Video (GVS) ViewCone data type; however, it defines some fundamentally different properties. This in turn has enabled a significantly different spatial context to be defined for the geographic space that terrestrial mobile video captures, by optimising it in terms of areal coverage and perceptible depth.
Viewpoint Implementation
Using standard photogrammetric methods from the camera calibration literature, [1][2], applied here in the general case, we implement the following operations to generate a Viewpoint's extents (table 1):
Circle of Confusion (COC): $COC = s_d / (a_c \, p_c)$. The COC measurement is the maximum permissible blur circle for an image and directly affects the Depth-Of-Field (DOF) calculations.

Angle-Of-View (AOV): $A = 2 \arctan\left(\frac{d}{2f}\right)$, where $d$ is the sensor dimension and $f$ the focal length. An AOV represents the camera lens properties as an arc angle.

Hyperfocal Distance: $D_h = \frac{f^2}{nc} + f$, where $n$ is the aperture f-number and $c$ the COC. A DOF Hyperfocal Distance is a measurable distance in front of the camera lens from which point to infinity the DOF extends.

Near Limit DOF: $D_{nh} = \dfrac{D_h}{2\left(1 + \dfrac{nc}{f}\right)}$.

Far Limit DOF: Near Limit DOF + View Limit.

Table 1: Implemented Camera Model Operations.
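To make the Table 1 operations concrete, the following Python sketch computes the AOV, hyperfocal distance and near DOF limit; the function names, units and example values are our own illustrations rather than the poster's implementation, and the COC is passed in directly since the terms of its divisor are not defined here.

```python
import math

# Camera-model operations from Table 1 (illustrative names and units:
# focal length f, sensor dimension d and COC c in millimetres; the
# f-number n is dimensionless).

def angle_of_view(d: float, f: float) -> float:
    """AOV: A = 2 * arctan(d / (2 f)), returned in degrees."""
    return math.degrees(2.0 * math.atan(d / (2.0 * f)))

def hyperfocal_distance(f: float, n: float, c: float) -> float:
    """Hyperfocal distance: Dh = f^2 / (n c) + f, in millimetres."""
    return f * f / (n * c) + f

def near_limit_dof(f: float, n: float, c: float) -> float:
    """Near DOF limit when focused at the hyperfocal distance:
    Dnh = Dh / (2 (1 + n c / f)), which reduces to f^2 / (2 n c)."""
    return hyperfocal_distance(f, n, c) / (2.0 * (1.0 + n * c / f))

if __name__ == "__main__":
    # Assumed example: an 8 mm lens at f/2.8, an 8 mm sensor dimension
    # and a 0.005 mm circle of confusion.
    f, n, c, d = 8.0, 2.8, 0.005, 8.0
    print(f"AOV:        {angle_of_view(d, f):6.1f} deg")              # ~53.1 deg
    print(f"Hyperfocal: {hyperfocal_distance(f, n, c)/1000:6.2f} m")  # ~4.58 m
    print(f"Near limit: {near_limit_dof(f, n, c)/1000:6.2f} m")       # ~2.29 m
```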
Figure 1: StratAG Mobile Mapping System (MMS) XP1 experimental platform.
Spatial Video Data Model
A view frustum model, calculated from the camera parameters, forms the basis of the geographical-extent representation of each spatial video frame (or static sequence). The OGC GVS implementation uses this principle in a restricted, two-dimensional ViewCone data structure. Our Viewpoint approach (figure 2) has led to a distinct change in the overall GVS data structure: from a single polygonal representation of space to a capture point plus a disconnected but associated image-space polygon representation set. The camera image-capture location is represented by a point, while the polygon defines the geographical space, as the optimised focus range, for the image space of each frame. Figure 3 shows a software tool for semi-supervised fitting of spatial video to a Viewpoint data view in a GIS. In this tool both the GIS space and the camera/image properties can be adjusted to achieve an accurate geographical-space representation.
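As a minimal sketch of this point-plus-polygon structure, assuming Python with the shapely library, the class below pairs a capture point with its disconnected image-space polygon; the class and field names are illustrative, not the framework's actual schema.

```python
from dataclasses import dataclass
from shapely.geometry import Point, Polygon

@dataclass
class Viewpoint:
    """One spatial video frame: the camera capture point plus the
    disconnected but associated image-space polygon that bounds the
    optimised focus range on the ground."""
    frame_id: int
    capture_point: Point   # camera image-capture location (lon, lat)
    image_space: Polygon   # geographical extent of the frame's image space

    def covers(self, query: Point) -> bool:
        """Would this frame's image space show the query location?"""
        return self.image_space.contains(query)

# Invented example: a capture point with a quadrilateral footprint ahead of it.
vp = Viewpoint(
    frame_id=42,
    capture_point=Point(-6.6000, 53.3800),
    image_space=Polygon([(-6.6001, 53.3801), (-6.5999, 53.3801),
                         (-6.5996, 53.3810), (-6.6004, 53.3810)]),
)
print(vp.covers(Point(-6.6000, 53.3805)))         # True: inside the image space
print(vp.image_space.contains(vp.capture_point))  # False: the point is disconnected
```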
Spatial Extrapolation Model
The automated process performed by the software shown in figure 3 is highlighted here in figure 4.

Figure 4: Spatial video post-survey processing flowchart. [Survey data are loaded onto a PC; the video data are converted to an easily indexed format and the video frame index IDs are decoded, while the GPS NMEA messages are decoded from the spatial data; both feed the Spatial Video Viewpoints Model (figure 2) through an automatic database-population process into the Viewpoints DB, which then serves GIS query and analysis and a spatial video player/viewer.]

1. Adjust the GPS coordinates to be coincident with the principal point of the camera image plane.
2. Calculate an adjusted Hyperfocal Sharpness Distance to the eight Viewpoint plane intersection points, figure 3.
3. Calculate an adjusted azimuth.
4. Calculate adjusted altitudes of the Viewpoint plane intersection points.
5. Use these results to solve the 3D geodetic forward algorithm as defined in Vincenty, [3]; a sketch of this step follows the list.
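As one possible rendering of step 5, the sketch below projects a single Viewpoint vertex using pyproj's geodesic direct solver, which answers the same forward problem as Vincenty's algorithm [3]; the coordinates, azimuth and distance are invented, and the altitude adjustments of step 4 are omitted because this solver works on the ellipsoid surface only.

```python
from pyproj import Geod

# Direct (forward) geodetic problem on the WGS84 ellipsoid: from a
# start position, an azimuth and a distance, find the end position.
geod = Geod(ellps="WGS84")

cam_lon, cam_lat = -6.5994, 53.3818   # adjusted GPS position (step 1), invented
azimuth_deg = 95.0                    # adjusted azimuth (step 3), invented
sharpness_m = 22.9                    # adjusted hyperfocal sharpness distance (step 2)

lon, lat, _back_azimuth = geod.fwd(cam_lon, cam_lat, azimuth_deg, sharpness_m)
print(f"Viewpoint plane vertex: lat {lat:.6f}, lon {lon:.6f}")
```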
Results and Applications
In figure 5 we show one result from the large system-calibration testing, while in figure 6 we highlight one GIS analysis operation on a large spatial video data warehouse. In this case a building-footprint data set is used to generate higher-accuracy Viewpoints by intersecting both GIS-constrained spatial data sets.
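One plausible reading of that intersection, sketched with shapely: clip the Viewpoint's image-space polygon against intersecting building footprints so it only claims unoccluded ground; the geometries below are invented for illustration.

```python
from shapely.geometry import Polygon

# A Viewpoint image-space polygon and one intersecting building
# footprint, in an arbitrary metric coordinate system.
viewpoint_poly = Polygon([(0, 0), (10, 0), (12, 20), (-2, 20)])
building = Polygon([(3, 8), (8, 8), (8, 14), (3, 14)])

# Remove the occluded footprint area to get a higher-accuracy Viewpoint.
refined = viewpoint_poly.difference(building)
print(f"area before: {viewpoint_poly.area:.1f} m^2, after: {refined.area:.1f} m^2")
```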
Figure 2: 3D Spatial Video Viewpoint.
Viewpoint Line   Point   Point-to-Line Distance (m)   Result
C to D           X       0.798                        Inside
C to D           W       0.026                        Inside
A to B           Y       1.066                        Outside
A to B           Z       0.051                        Inside

Figure 5: Plan view of Viewpoint fitting in a test scenario, showing accuracy results.
Figure 6: Using building-footprint spatial data to generate higher-accuracy Viewpoints.
Figure 3: Software module for semi-supervised spatial video Viewpoint fitting.

References
1. Wheeler, R.E. Notes on View Camera Geometry. 2003, 55. www.bobwheeler.com/photo/ViewCam.pdf.
2. Wolf, P.R. and DeWitt, B.A. Elements of Photogrammetry (with Applications in GIS). McGraw-Hill Higher Education, 2000.
3. Vincenty, T. Direct and Inverse Solutions of Geodesics on the Ellipsoid with Application of Nested Equations. Survey Review 23, 176 (1975), 88-93.
Research presented in this poster was funded by a Strategic Research Cluster grant (07/SRC/I1168) from Science Foundation Ireland under the National Development Plan. The authors gratefully acknowledge this support.