Snow depth reconstruction using UAV-based Lidar and Photogrammetry
Ben Vander Jagt1, Michael Durand1, Arko Lucieer2, Darren Turner2, Luke Wallace2
bvanderj@gmail.com
AGU Annual Conference, Dec. 2013
Moscone Convention Center
San Francisco, CA
1 School of Earth Sciences, The Ohio State University
2 School of Geography, University of Tasmania
1. Introduction
A. Unmanned Aerial Vehicles (UAVs)
Remote sensing technology has improved a great deal in recent decades, and the miniaturization
of sensors and positioning systems has paved the way for the use of Unmanned Aerial Vehicles
(UAVs) in a wide range of environmental remote sensing applications. The datasets produced
by UAV remote sensing are of such high detail that characteristics of the landscape can be
mapped that are simply not distinguishable at the resolutions generally obtainable from manned
aircraft and satellite systems. Furthermore, the ease of deployment and low running costs of
UAV systems allow for frequent missions, providing very high spatial and temporal resolution
datasets on demand.
Figure 1: Common UAV platforms, such as multirotor helicopters (left), can be used to produce high quality
remote sensing products using off-the-shelf imaging cameras (middle) and low cost lidar (right). These platforms
can be operated remotely via radio link, and/or autonomously using an onboard navigation system.
B. Remote Sensing of Snowpack
Snow is a principal component of the hydrologic budget in many parts of the world, so being
able to measure snow parameters over a spatially continuous area has both civil
and scientific merit. The scale of spaceborne measurements often presents unique challenges
due to subpixel variability of the different variables that contribute to the measurement (e.g.
microwave remote sensing). In situ measurements, while accurate, do not capture the spatial
heterogeneity of the snowpack.
Figure 2: Typical snow depth measurements are discrete in nature (rather than spatially continuous), and often
expose field personnel to environments with risk factors including avalanches and extreme weather.
2. Study Area and Datasets
Our study site was located in Mount Field National Park, Tasmania, near the summit of Mount
Mawson. We chose the site because it is characteristic of typical alpine environments with
steep slopes, high winds, and deep snow pack.
Figure 3: Inset of Mount Field National Park, located in South Central Tasmania (left). Also shown is an orthomosaic of our field site (right). The total size of our study area was approximately 1 hectare.

Datasets
Lidar and digital images were collected over the study area during two flights. The position and
orientation of the lidar and camera were observed and time-stamped using a dual-frequency
GNSS receiver fused with an IMU. The navigation solution consisted of a loosely-coupled
Sigma Point Kalman Filter. The GNSS observations were differentially post-processed to
yield estimated coordinate accuracies in the 2-4 cm range at the antenna.

We used commercially available off-the-shelf products, which are widely available,
to demonstrate the practicality of such a platform for snow depth retrieval.

Table 1: Manufacturer, model, and estimated cost of sensors used in this study.

Snow Depth Observations
To validate our methodologies, we measured snow depth at 37 spatially distributed locations
within our study area. Using RTK GPS, we first measured the snow surface at a point, after
which the ground surface was measured. The snow depth was calculated by differencing the
two observations. The measurement accuracy is ~3 cm RMSE.
3. Methods
A. Collinearity Equations
The collinearity equations relate the measured image coordinates in the 2D camera
coordinate system to those in the “real world” 3D Cartesian coordinate system in the
following manner.
$$
\begin{pmatrix} x_{im} \\ y_{im} \end{pmatrix} =
\begin{pmatrix}
x_o + f \, \dfrac{r_{11}(X_A - X_0) + r_{21}(Y_A - Y_0) + r_{31}(Z_A - Z_0)}{r_{13}(X_A - X_0) + r_{23}(Y_A - Y_0) + r_{33}(Z_A - Z_0)} \\[2ex]
y_o + f \, \dfrac{r_{12}(X_A - X_0) + r_{22}(Y_A - Y_0) + r_{32}(Z_A - Z_0)}{r_{13}(X_A - X_0) + r_{23}(Y_A - Y_0) + r_{33}(Z_A - Z_0)}
\end{pmatrix}
$$
$x_{im}, y_{im}$ - image coordinates of conjugate points
$r_{i,j}$ - elements of the rotation matrix describing the camera orientation parameters
$X_0, Y_0, Z_0$ - object space coordinates of the camera exposure station
$x_o, y_o, f$ - interior orientation parameters of the camera (known from calibration)
$X_A, Y_A, Z_A$ - object space coordinates of the target (snow surface)
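To make the projection concrete, here is a minimal numerical sketch of the collinearity equations in Python. The omega-phi-kappa rotation parameterization and all numeric values are illustrative assumptions, not calibration data from this study.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Object-to-camera rotation from omega-phi-kappa angles (radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), np.sin(omega)],
                   [0, -np.sin(omega), np.cos(omega)]])
    Ry = np.array([[np.cos(phi), 0, -np.sin(phi)],
                   [0, 1, 0],
                   [np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), np.sin(kappa), 0],
                   [-np.sin(kappa), np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity_project(X_A, X_0, R, x_o, y_o, f):
    """Image coordinates of object point X_A seen from exposure station X_0.

    R.T @ (X_A - X_0) reproduces the r_{i,j} combinations in the equations
    above: its first component is r11*dX + r21*dY + r31*dZ, and so on.
    """
    d = R.T @ (X_A - X_0)
    x_im = x_o + f * d[0] / d[2]
    y_im = y_o + f * d[1] / d[2]
    return x_im, y_im

# Illustrative values only: camera roughly 50 m above a snow-surface point.
R = rotation_matrix(0.01, -0.02, 0.0)
X_0 = np.array([527000.0, 5260000.0, 1300.0])  # exposure station (m)
X_A = np.array([527005.0, 5260002.0, 1250.0])  # snow-surface point (m)
print(collinearity_project(X_A, X_0, R, 0.0, 0.0, 0.016))
```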
B. Space Intersection
The method for determining the 3D location of target points from image measurements is
known in conventional analytical photogrammetry as space intersection. Assuming the position
and orientation of the camera at the time of exposure are known for a stereopair (e.g. from a
GPS/IMU solution), the 3D "real world" position of an identifiable feature can be computed,
provided that image coordinates are measured in a minimum of two photographs. The standard
model used for this calculation is based on the well-known collinearity equations, described
above, which relate the measured image coordinates of an object A to the 3D object space
coordinates of A.
Figure 4: A diagram of the photogrammetric space intersection
is shown above. Using a minimum of two images with
covisible points (top right), the 3D coordinates of the snow
surface can be determined (bottom right).
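A minimal sketch of space intersection under the same conventions: build one ray per image from the measured image coordinates and the known camera pose, then solve for the point closest to all rays in a least-squares sense (the midpoint method). All poses and measurements below are invented for illustration.

```python
import numpy as np

def image_ray(x_im, y_im, X_0, R, x_o, y_o, f):
    """Object-space ray through the exposure station and the image point."""
    d = R @ np.array([x_im - x_o, y_im - y_o, f])
    return X_0, d / np.linalg.norm(d)

def intersect_rays(rays):
    """Least-squares point closest to all rays (midpoint method)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for origin, d in rays:
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane normal to d
        A += P
        b += P @ origin
    return np.linalg.solve(A, b)

# Two nadir images 20 m apart, 100 m above the target; f is signed so the
# rays point down toward the snow surface (illustrative numbers only).
f = -0.016
ray1 = image_ray( 0.0016, -0.0008, np.array([ 0.0, 0.0, 100.0]), np.eye(3), 0.0, 0.0, f)
ray2 = image_ray(-0.0016, -0.0008, np.array([20.0, 0.0, 100.0]), np.eye(3), 0.0, 0.0, f)
print(intersect_rays([ray1, ray2]))  # ~ [10, -5, 0]
```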
C. Lidar and Lidar Equation
Lidar is a remote sensing technology that measures distance by illuminating a target with a
laser and analyzing the reflected light. Accurate ranging is accomplished by recording the
time-of-flight of each pulse. The position and orientation of the scan system are needed to
construct and orient the point cloud, and accurate timing is arguably the most important
component of lidar data collection. Each laser return is mapped into the ground coordinate
system with the lidar georeferencing equation:

$$
\mathbf{r}^m = \mathbf{r}^m_{INS} + \mathbf{R}^m_{INS}\left(\mathbf{R}^{INS}_{L}\,\mathbf{r}^L + \mathbf{b}^{INS}\right)
$$

$\mathbf{r}^m$ - 3D coordinates of the object point in the mapping frame
$\mathbf{r}^L$ - 3D object coordinates in the laser frame
$\mathbf{r}^m_{INS}$ - 3D coordinates of the INS in the mapping frame
$\mathbf{R}^m_{INS}$ - rotation matrix from INS to mapping frame
$\mathbf{R}^{INS}_{L}$ - rotation matrix from laser to INS frame
$\mathbf{b}^{INS}$ - lever arm offset in the INS frame

Figure 5: A diagram of the lidar measurement. If the position, orientation, and time are known,
3D points of the surface can be determined. The accuracy of the point cloud is dependent on
the quality of the GPS/IMU solution.
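As a sketch, the georeferencing equation above translates directly into a few lines of code; the attitude matrices, boresight alignment, and lever arm below are placeholders rather than this system's calibration values.

```python
import numpy as np

def georeference(r_L, r_m_INS, R_m_INS, R_INS_L, b_INS):
    """Map one laser return into the mapping frame: the equation above."""
    return r_m_INS + R_m_INS @ (R_INS_L @ r_L + b_INS)

# One 25 m return along the laser z-axis (all values illustrative).
r_L = np.array([0.0, 0.0, 25.0])
r_m_INS = np.array([527000.0, 5260000.0, 1300.0])  # INS position (m)
R_m_INS = np.eye(3)   # INS attitude, interpolated to the return's timestamp
R_INS_L = np.eye(3)   # laser-to-INS boresight alignment
b_INS = np.array([0.05, 0.00, -0.10])              # lever arm (m)
print(georeference(r_L, r_m_INS, R_m_INS, R_INS_L, b_INS))
```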
4. Results
A. 3D Point Clouds from Lidar and Photogrammetry
While the output of the lidar equation is a set of points mapped directly into the 3D object
space coordinate system, the point cloud derived from photogrammetric techniques requires
additional processing. The camera poses and a sparse point cloud are first produced as output
from the bundle adjustment. Once the camera poses are estimated, a dense point cloud can be
generated by iteratively matching all covisible pixels in the images via the epipolar condition
and calculating the object space coordinates via the collinearity equations; a sketch of the
epipolar constraint is given below.
Figure 6: Point clouds of the snow surface at different stages of processing. The lidar-derived clouds are
immediately available at their densest resolution (left), whereas the photogrammetrically derived cloud is
transformed from sparse to dense after the bundle adjustment is performed.
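The sketch below shows the epipolar constraint that makes dense matching tractable: with both camera poses known, a pixel in one image maps to a line in the other, reducing the correspondence search to one dimension. This follows the standard computer-vision formulation rather than any particular software pipeline; the poses and calibration matrix are illustrative.

```python
import numpy as np

def skew(t):
    """Cross-product matrix: skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def fundamental_matrix(K1, R1, C1, K2, R2, C2):
    """F from two known poses (R maps object space into the camera frame)."""
    R_rel = R2 @ R1.T              # camera-1 frame to camera-2 frame
    t_rel = R2 @ (C1 - C2)         # camera-1 center expressed in camera 2
    E = skew(t_rel) @ R_rel        # essential matrix
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

def epipolar_line(F, u1):
    """Homogeneous line F @ u1 in image 2 on which the match must lie."""
    return F @ np.array([u1[0], u1[1], 1.0])

# Illustrative calibration and poses for two nadir images 20 m apart.
K = np.array([[2000.0, 0.0, 1500.0],
              [0.0, 2000.0, 1000.0],
              [0.0, 0.0, 1.0]])
F = fundamental_matrix(K, np.eye(3), np.array([0.0, 0.0, 100.0]),
                       K, np.eye(3), np.array([20.0, 0.0, 100.0]))
print(epipolar_line(F, (1600.0, 950.0)))  # search along this line in image 2
```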
B. Accuracy Validation with and without ground control
While there exists enough texture in the images of the snow-covered ground to determine
feature points and run the subsequent bundle adjustment, the point clouds themselves are of no
use unless they are accurately geo-registered to the ground surface. To validate, we measured
the coordinates of ground control targets in the imagery and compared the true values to those
found in both the lidar and photogrammetric point clouds.
Figure 7: Errors in the different observation methodologies when
compared to ground control points measured with GNSS.
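A minimal sketch of this check: difference the point-cloud elevations at each ground control target against the GNSS-surveyed truth, then summarize bias and RMSE. The numbers here are placeholders, not the measurements behind Figure 7.

```python
import numpy as np

def error_summary(cloud_z, gcp_z):
    """Per-target vertical errors, mean bias, and RMSE (all in metres)."""
    err = np.asarray(cloud_z) - np.asarray(gcp_z)
    return err, err.mean(), np.sqrt(np.mean(err ** 2))

# Hypothetical elevations at four targets: point cloud vs. surveyed truth.
errors, bias, rmse = error_summary([1250.14, 1251.08, 1249.93, 1250.61],
                                   [1250.00, 1250.95, 1249.78, 1250.50])
print(f"bias = {bias:+.3f} m, RMSE = {rmse:.3f} m")
```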
C. Simulated Depth Measurements (Based on accuracy assessment)
Unfortunately, due to calibration issues and travel obligations, there has not yet been a
snow-free data collection at the field site. Therefore, we can only simulate what the errors in
depth would look like based on our accuracy assessment. Because the vertical errors were
biased, differencing the snow-on and snow-free surfaces should remove the bias, leaving the
true snow depth.
Figure 8: Plots of the true vs. estimated depths are shown for photogrammetry (left) and lidar
(right). Plots are shown with (blue) and without (green) the bias removed.
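A short synthetic sketch of the argument: if both the snow-on and snow-free surveys carry the same vertical bias, the bias cancels in the difference and only random noise remains in the depth estimate. The bias and noise magnitudes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
true_depth = rng.uniform(0.5, 2.0, n)        # true snow depth (m)
ground = rng.uniform(1200.0, 1210.0, n)      # snow-free surface elevation (m)
bias, noise = 0.15, 0.03                     # shared bias, per-survey noise (m)

snow_on = ground + true_depth + bias + noise * rng.standard_normal(n)
snow_off = ground + bias + noise * rng.standard_normal(n)

est_depth = snow_on - snow_off               # the shared bias cancels here
print(np.mean(est_depth - true_depth))       # ~0 m: only random noise remains
print(np.std(est_depth - true_depth))        # ~ sqrt(2) * 0.03 m
```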
5. Conclusions
This poster has outlined the methodology that one could employ to generate accurate spatially
continuous estimates of snow depth from low-cost UAV-acquired imagery and lidar. Point
clouds have absolute accuracies in the range of 10–19 cm, depending on the technique used.
Relative accuracies are much higher, and we believe the bias results from system
calibration error.
Acknowledgment
This study was funded with an NSF East Asia and Pacific Studies Institute (EAPSI) fellowship, award #OISE-1310711. The author
wishes to personally thank his colleagues at the University of Tasmania in Hobart for their hospitality, time, and effort. This study
would not have been possible without their support. We also wish to acknowledge Nora May for the use of several figures in the poster.
References
1. May, N. A rigorous approach to comprehensive performance analysis of state-of-the-art airborne mobile mapping systems. Ph.D.
dissertation, The Ohio State University, 2008.
2. Kraus, K. Photogrammetry: Geometry from Images and Laser Scans, Volume 1, 2nd Edition. Walter de Gruyter, 2007.
3. Lowe, D. G. "Object recognition from local scale-invariant features." Proceedings of the Seventh IEEE International Conference
on Computer Vision, Vol. 2, IEEE, 1999.
4. Wallace, L., Lucieer, A., Watson, C., & Turner, D. (2012). Development of a UAV-LiDAR system with application to forest
inventory. Remote Sensing, 4(6), 1519-1543.