EIVA NaviModel3
Total Vertical Uncertainty Analysis Tool
Determining the TVU of Multi- and Single-beam Surveys
Lars Dall, EIVA A/S
EIVA NaviModel3 - TVU
Contents:
• Total Vertical Uncertainty in IHO SP-44
• The Implementation of the TVU analysis tool in NM3
• TVU analysis, a Case Story
• Summary
Total Vertical Uncertainty in SP-44
In Special Publication number 44 (SP-44) from IHO, the concept of
Total Vertical Uncertainty (TVU) is defined as follows:
• The component of total propagated uncertainty (TPU) calculated in the
vertical dimension. TVU is a one-dimensional quantity
With TPU (Total Propagated Uncertainty) defined as:
• The result of uncertainty propagation, when all contributing measurement
uncertainties, both random and systematic, have been included in the
propagation
Total Vertical Uncertainty in SP-44 II
SP-44 further states that:
• Vertical uncertainty is to be understood as the uncertainty of the reduced
depths. In determining the vertical uncertainty the sources of individual
uncertainties need to be quantified. All uncertainties should be combined
statistically to obtain the TVU
Total Vertical Uncertainty in SP-44 III
SP-44 further states that:
• Recognizing that there are both depth independent and depth dependent errors that
affect the uncertainty of the depths, the formula below is to be used to compute, at the
95% confidence level (1.96 * σ), the maximum allowable TVU. The parameters ‘a’ and
‘b’ together with the depth ‘d’ have to be introduced into the formula in order to
calculate the maximum allowable TVU for a specific depth:
√( a² + (b * d)² )
• Where:
• a represents that portion of the uncertainty that does not vary with depth
• b is a coefficient which represents that portion of the uncertainty that varies with depth
• d is the depth
• (b * d) represents the portion of the uncertainty that varies with depth
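As an illustration of the formula (not part of SP-44; the function name and example values are chosen here, using the Special Order parameters a = 0.25 m, b = 0.0075), the maximum allowable TVU can be evaluated as follows:

    import math

    def max_allowable_tvu(a: float, b: float, d: float) -> float:
        """Maximum allowable TVU at the 95% confidence level, per the SP-44 formula."""
        return math.sqrt(a ** 2 + (b * d) ** 2)

    # Example: Special Order (a = 0.25 m, b = 0.0075) at 20 m depth
    print(f"{max_allowable_tvu(0.25, 0.0075, 20.0):.3f} m")  # -> 0.292 m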
Total Vertical Uncertainty in SP-44 IV
Special Order
• Description of areas: Areas where under-keel clearance is critical
• Maximum allowable THU (95% confidence level): 2 m
• Maximum allowable TVU (95% confidence level): a = 0.25 m, b = 0.0075
• Full sea floor search: Required
• Feature detection: Cubic features > 1 m
• Recommended maximum line spacing: Not defined, as full sea floor search is required
• Positioning of fixed aids to navigation and topography significant to navigation (95% confidence level): 2 m
• Positioning of the coastline and topography less significant to navigation (95% confidence level): 10 m
• Mean position of floating aids to navigation (95% confidence level): 10 m

Order 1a
• Description of areas: Areas shallower than 100 m where under-keel clearance is less critical but features of concern to surface shipping may exist
• Maximum allowable THU (95% confidence level): 5 m + 5% of depth
• Maximum allowable TVU (95% confidence level): a = 0.5 m, b = 0.013
• Full sea floor search: Required
• Feature detection: Cubic features > 2 m in depths up to 40 m; 10% of depth beyond 40 m
• Recommended maximum line spacing: Not defined, as full sea floor search is required
• Positioning of fixed aids to navigation and topography significant to navigation (95% confidence level): 2 m
• Positioning of the coastline and topography less significant to navigation (95% confidence level): 20 m
• Mean position of floating aids to navigation (95% confidence level): 10 m

Order 1b
• Description of areas: Areas shallower than 100 m where under-keel clearance is not considered to be an issue for the type of surface shipping expected to transit the area
• Maximum allowable THU (95% confidence level): 5 m + 5% of depth
• Maximum allowable TVU (95% confidence level): a = 0.5 m, b = 0.013
• Full sea floor search: Not required
• Feature detection: Not applicable
• Recommended maximum line spacing: 3 * average depth or 25 m, whichever is greater; for bathymetric lidar a spot spacing of 5 * 5 m
• Positioning of fixed aids to navigation and topography significant to navigation (95% confidence level): 2 m
• Positioning of the coastline and topography less significant to navigation (95% confidence level): 20 m
• Mean position of floating aids to navigation (95% confidence level): 10 m

Order 2
• Description of areas: Areas generally deeper than 100 m where a general description of the sea floor is considered adequate
• Maximum allowable THU (95% confidence level): 20 m + 10% of depth
• Maximum allowable TVU (95% confidence level): a = 1.0 m, b = 0.023
• Full sea floor search: Not required
• Feature detection: Not applicable
• Recommended maximum line spacing: 4 * average depth
• Positioning of fixed aids to navigation and topography significant to navigation (95% confidence level): 5 m
• Positioning of the coastline and topography less significant to navigation (95% confidence level): 20 m
• Mean position of floating aids to navigation (95% confidence level): 20 m
Implementation of TVU Analysis in NM3
The Total Vertical Uncertainty analysis tool in NaviModel3 has been designed to facilitate a determination of the quality of a hydrographic survey. The part of the quality that is investigated is the TVU. The hydrographic data to be tested may be acquired using either single-beam or multi-beam techniques.
The basis of the analysis is a base model that must be superior, in terms of TVU, to the survey spread that is to be tested. In the present context, the base model is termed the ‘Reference Model’. The survey spread to be tested is termed the ‘Test Survey’.
Implementation of TVU Analysis in NM3 II
Bearing in mind that the Total Vertical Uncertainty analysis tool is based on comparing a surveyed line against a reference model, it is of the utmost importance to obtain a good, superior reference model. This can be achieved in different ways:
• By utilizing a superior survey configuration (calibration, acquisition method, instrumentation etc.)
• By surveying the area with multiple lines and in a variety of directions
• By thoroughly cleaning and editing the data acquired
The ideal approach is to carry out a combination of the three.
Implementation of TVU Analysis in NM3 III
In NaviModel3 this means that the data for the reference model are acquired with superior accuracy and coverage, and are meticulously cleaned using the best possible combination of manual and automatic cleaning techniques. Further, in order to arrive at the best possible basis for the testing, the cell size of the reference model should be adequately small, i.e. as small as possible while still keeping sufficient data in all cells. With smaller cell sizes, undesired influences from the generalization of the observed data into cells with only one attribute value are minimized.
Implementation of TVU Analysis in NM3 IV
Data for the testing, the test survey, must be acquired with a well-calibrated, yet typical survey spread. It is recommended that the test survey is not run parallel to any of the runlines associated with the reference model. In the post-processing phase, the test data should also be cleaned, in order to remove the influence of outliers and gross errors.
To prepare for the testing, the test survey must be exported from NaviEdit in a special format, named ‘Ascii XYZ,Angle,Quality’, once the required editing and cleaning has been completed.
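As a sketch only, assuming the ‘Ascii XYZ,Angle,Quality’ export holds one beam per line with easting, northing, depth, beam angle and a quality value (the exact column layout should be verified against the NaviEdit documentation), such a file could be read for further processing like this:

    from dataclasses import dataclass

    @dataclass
    class TestBeam:
        x: float      # easting [m]
        y: float      # northing [m]
        z: float      # depth [m]
        angle: float  # beam angle [deg]
        quality: int  # quality flag

    def read_test_survey(path: str) -> list[TestBeam]:
        """Read an exported test survey, assuming one comma/whitespace separated record per line."""
        beams = []
        with open(path) as f:
            for line in f:
                parts = line.replace(",", " ").split()
                if len(parts) < 5:
                    continue  # skip blank or malformed lines
                x, y, z, angle = map(float, parts[:4])
                beams.append(TestBeam(x, y, z, angle, int(float(parts[4]))))
        return beams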
Implementation of TVU Analysis in NM3 V
Once the reference model is loaded into NaviModel3 and the test survey has been created, the Total Vertical Uncertainty analysis can be performed as follows:
Implementation of TVU Analysis in NM3 VI
The TVU test contains, apart from general information, three subsets:
• Relative to IHO (SP-44), all three orders (default is on)
• Relative to USACE standards (default is off)
• User defined order test, based on requirements similar to IHO (default is off)
It is also possible to define the opening angle (of the test survey) to be investigated, as well as the number of bins used in the graphical histogram representation.
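Conceptually, the opening angle acts as a filter on the beam angle and the bin count controls the resolution of the error histogram. A minimal sketch of this idea (illustrative only, not the NM3 implementation):

    import numpy as np

    def error_histogram(errors, angles, angle_limit_deg=45.0, n_bins=50):
        """Histogram of depth errors for beams within +/- angle_limit_deg of nadir."""
        errors = np.asarray(errors, dtype=float)
        angles = np.asarray(angles, dtype=float)
        selected = errors[np.abs(angles) <= angle_limit_deg]
        counts, edges = np.histogram(selected, bins=n_bins)
        probabilities = counts / counts.sum()  # probability per bin
        return edges, probabilities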
Implementation of TVU Analysis in NM3 VII
The TVU general data comprises:
• Statistical results from the comparison between the reference model and the test data (mean diff., std. dev., max, min, 95% confidence etc.; a minimal sketch of such statistics is shown after this list)
The report associated with each subset contains:
• IHO standards: Calculation of limit and test result for all orders (with predefined values for a and b)
• User Defined Test: Calculation of limit and test result, relative to user-defined values for a and b
• USACE: Statistics associated with ‘Engineering and Design for Hydrographic Surveying’, dated 1 April 2004, from the US Army Corps of Engineers. The standards state requirements for the 95% confidence figures associated with surveys for navigation channels and dredging support (results are in US survey feet)
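A minimal sketch of such statistics, computed from per-beam differences between the test survey and the reference model (here the 95% confidence figure is taken as the 95th percentile of the absolute differences; NM3 may define it differently):

    import numpy as np

    def tvu_statistics(differences):
        """Basic statistics of test-minus-reference depth differences."""
        d = np.asarray(differences, dtype=float)
        return {
            "mean_diff": d.mean(),
            "std_dev": d.std(ddof=1),
            "min": d.min(),
            "max": d.max(),
            "conf_95": np.percentile(np.abs(d), 95.0),
        }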
Implementation of TVU Analysis in NM3 VIII
The histogram window is divided into three parts:
• The Confidence plot shows the quality of beams as a function of the beam angle. The X-axis is the beam angle (in degrees), whereas the Y-axis depicts the associated error, with the grey area covering 95% of all beams. The blue line visualizes the mean error
• The Error probability histogram shows the error distribution of all beams within the selected angle limit. The X-axis depicts the error in meters, whereas the Y-axis shows the probability
• The Beam count histogram shows the probability distribution of a given number of beams in the reference model. The X-axis shows the number of beams, whereas the Y-axis gives the probability in percent of any given number
Implementation of TVU Analysis in NM3 IX
Reports associated with the Total Vertical Uncertainty analysis can be generated. Plots associated with each of the histograms can also be created.
TVU Analysis, a Case Story
The case story is based on a recent project, where the main requirements (of relevance) were specified in the ‘Scope of Works’ as follows:
• Maximum allowable TVU within 95% confidence level:
a = 0.05 m, b = 0.002
(expected depth of 20 m (d) equals a limit of 0.064 m (95% confidence))
• Minimum ‘resolution’ requirement:
40 soundings/m2
• The processed and gridded bathymetry (the DTM) shall have a grid resolution of 0.2 m
• At least 95% of all nodes (cells) shall be populated with at least 3 soundings
TVU Analysis, a Case Story II
The survey equipment set-up chosen to meet these extremely strict requirements comprised the following instrumentation:
• 3D positioning: POS MV 320 (primary system) & Javad Delta 3GT (secondary system)
• Gyro compass: POS MV 320
• Motion sensor: POS MV 320
• Multi-beam echo-sounder: Reson 7125 (with online Reson SVP 70)
• Sound velocity probe: Reson SVP 15
TVU Analysis, a Case Story III
Further, in order to meet the requirements:
• Only instruments and software solutions with the highest performance are used. The methods proposed have been chosen in order to take full advantage of the superiority of the hardware and software parts of the system. The overall specifications of the proposed system can thus be regarded as optimum with respect to commercially available software and instrumentation
• Survey sensors are physically mounted on a rigid pole. Variations in offsets and mounting angles due to deformation of the structure can thus be neglected. Also, the impact of uncertainties in the offset measurements can be regarded as insignificant
• The MBE swath width is limited to +/- 45°, to minimize errors from un-modelled ray bending and grazing-angle effects
• The system performance is based on a high-accuracy system calibration (calibration will not affect the overall system performance)
• The system performance is based on valid, up-to-date sound velocity profiles
TVU Analysis, a Case Story IV
In order to substantiate, on an a priori basis, that the accuracy requirements can be met with the proposed equipment, error budgets have been prepared for the vertical as well as for the horizontal component.
In the error budgets, errors originating from the specifications of the instruments are treated as random, as opposed to the systematic errors originating from calibrations. The systematic values given in the budgets are directly linked to the acceptance criteria of the calibrations.
A few reservations have been made in the TVU budget, shown below; the majority of these are based upon the fact that the TPU values are linked to a relative approach.
TVU Analysis, a Case Story V
• It is assumed that the time delay is 0 ms (the time-tagging of the multi-beam data and of the position data is configured so that NaviPac/NaviScan and the MBE are using the same, accurate time reference)
• The accuracy by which the online system is capable of time-tagging the sensor data is optimized. The latency value(s) can hence be set to 0
• The heave value is corrected in the post-processing phase (delayed heave from POS MV), whereby contributions originating from absolute drift are minimized
• The multi-beam accuracy is the accuracy by which a single beam can be determined in a totally controlled environment
• Sound velocity determination is related to the accuracy by which the sound velocity profile in the water column can be determined. Influences from sound velocity interpolation are ignored, since it is assumed that the characteristics of the water column are (close to) identical to those determined
• Influences from offsets, geoid model and reference station coordinates all have a relative contribution of 0
TVU Analysis, a Case Story VI
The 95% confidence values have been calculated according to the following formulas:
σ_Total = √( Σ σ²_Normal distributed ) + Σ E_Systematic

1.96 * σ_Total = 1.96 * √( Σ σ²_Normal distributed ) + Σ E_Systematic
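A minimal sketch of this combination, with the random (normally distributed) contributions combined in quadrature and scaled to 95%, and the systematic contributions added directly, as in the formulas above (names are illustrative):

    import math

    def confidence_95(random_sigmas, systematic_errors):
        """1.96 * sigma_total: random terms combined in quadrature, systematic terms added linearly."""
        sigma_random = math.sqrt(sum(s ** 2 for s in random_sigmas))
        return 1.96 * sigma_random + sum(systematic_errors)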
TVU Analysis, a Case Story VII
The survey lines covered the survey area (thick green line) in an East/West and in a North/South direction respectively, with a line spacing of 15 m. In addition, two times two extra lines crossed the area from corner to corner. All survey lines were run twice, in opposite directions. This yields an 800% coverage as the basis for the reference model(s) of the TVU analysis, in accordance with the overall requirements to such a model. Further on the data and the model:
• The data (reference model as well as test survey data) have been thoroughly cleaned and edited in the post-processing phase
• The reference model was based on approximately 65 million depth observations in an area of a little more than 200 * 200 meters. This is equivalent to more than 1,000 observations per square meter (see the quick check below). Similarly, the test survey(s) consisted of almost 1.5 million depths, in the full swath
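A quick back-of-the-envelope check of the quoted density (sketch only; the figures are those stated above):

    observations = 65_000_000       # depth observations in the reference model
    area_m2 = 200 * 200             # survey area, a little more than 200 * 200 m
    print(observations / area_m2)   # -> 1625.0 observations per square meter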
TVU Analysis, a Case Story VIII
The TVU analysis was performed on identical observation sets, in a variety of ways,
in order to investigate the effect of employing different post-processing methods:
• Different cleaning methods (manual and S-can based)
• Different approaches to bathymetric (RTK-height) smoothing
• Different approaches to heave correction
• Post-processed versus online 3D position
• Different cell size in the reference model

1. Real-time heave, unsmoothed bathy
2. Real-time heave, smoothed bathy
3. Delayed (true) heave, unsmoothed bathy
4. Delayed (true) heave, smoothed bathy
5. POSPac merged positions with heave, unsmoothed bathy
6. POSPac merged positions with heave and smoothed bathy
7. POSPac merged position without heave
For each method, two models were generated: one where the data had been the subject of manual cleaning and one where S-can cleaning had been performed. Investigations regarding the consequence of changing the cell size were furthermore done with one of the model types.
TVU Analysis, a Case Story IX
The first analysis was done on the basis of models that were generated from real-time heave, with the bathy value not being the subject of smoothing. Two test surveys were investigated for both systems, one in a relatively flat area (N017) and one that included the excavation (N01). From the results below, it is clear that for the flat area the results are indicators of the vertical uncertainty only, whereas the result from line N01 includes position error as well, since a large position change will involve a large depth change in the dredged areas.
Observe furthermore how the S-can cleaned models have a TVU value that is approximately 10% better than that of the manually cleaned models. Also observe how the results that include a 2 * 60° opening angle are only slightly less accurate than those from a 2 * 45° coverage.
TVU Analysis, a Case Story X
The bathy value is arrived at (online in NaviPac/NaviScan as well as offline in NaviEdit) by subtracting the heave value from the RTK-height, in order not to compensate twice for the high-frequency movements (the heave). Normally the bathy value is then smoothed in order to remove undesired high-frequency noise, with the noise thought to originate from the GPS. As can be seen here, the accuracy actually deteriorates after the smoothing, thus indicating that the RTK-height values are OK.
In the present situation, with an integrated system, the RTK height and the motion data originate from the same source, with the same time-tagging reference. In such a situation, smoothing of the bathy does not improve the vertical accuracy.
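As a minimal sketch of the bathy calculation described above, together with a simple moving-average smoother standing in for the bathy smoothing (function names are illustrative; NaviPac/NaviScan and NaviEdit implement their own algorithms):

    import numpy as np

    def bathy_from_rtk(rtk_height, heave):
        """Bathy value: RTK height with the high-frequency heave removed,
        so the vertical motion is not compensated twice."""
        return np.asarray(rtk_height, dtype=float) - np.asarray(heave, dtype=float)

    def smooth(values, window=5):
        """Simple centred moving average, as a stand-in for bathy smoothing."""
        kernel = np.ones(window) / window
        return np.convolve(values, kernel, mode="same")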
TVU Analysis, a Case Story XI
The next analyses are made on the basis of delayed heave values from the POS MV. These are generated by the POS MV with a delay of 3 minutes. The NaviPac POS MV interface that distributes the received POS MV data to NaviPac and to NaviScan can be configured to log this information, to be subsequently merged in NaviEdit. A recalculation of the bathy value must then take place (cf. the previous slide).
Surprisingly, the results are close to those associated with real-time heave. It must however be expected that in a situation with rougher weather and with accelerations perpendicular to the sailing direction, delayed heave might yield an improvement relative to the real-time heave.
TVU Analysis, a Case Story XII
The final analyses were made on the basis of gyro, motion and position data originating from the POSPac software. Processing was performed on the basis of raw online data and RINEX data logged at the 5 reference stations in the area. Processed data were subsequently merged into NaviEdit as new position, motion, GPS-height and gyro data. Subsequently, the bathy value was recalculated.
TVU Analysis, a Case Story XIII
For the POSPac based models, no improvements are seen for the flat area. For data including the excavation, a substantial improvement can be observed. This is most likely caused by improvements in the position in the POSPac data.
Again, smoothing of the bathy does not improve the vertical accuracy, and the S-can cleaning yields better results than the time-consuming manual cleaning.
TVU Analysis, a Case Story XIV
POSPac without heave is based on modelling the vertical movements on the basis of the RTK-height only. As can be seen, this processing method gives the best results in all situations.
TVU Analysis, a Case Story XV
POSPac without heave essentially means that the heave components should be removed from the vertical error budget.
TVU Analysis, a Case Story XVI
The final investigation shows the consequence of changing the cell size in the reference model. Whereas the previous comparisons were based on 10 cm grid cells, 5 and 20 cm cells were used for the comparisons here.
The 20 cm cell models result in higher TVU values in all situations, because of the undesired influences from the generalization of the observed data that a DTM expresses.
The 5 cm cell models, on the other hand, can be considered closer to the raw data and will thus give lower TVU results. The danger is that the models, and thereby the analysis, become statistically weak when the cell size is decreased to a value that results in a low observation population in the cells.
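To make the cell-size trade-off concrete, here is a minimal sketch (simple binning for illustration, not the NM3 gridding algorithm) of how the observation population per cell shrinks as the cell size is reduced:

    import numpy as np

    def cell_population(x, y, cell_size):
        """Count soundings per cell for a given cell size [m]."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        ix = np.floor((x - x.min()) / cell_size).astype(int)
        iy = np.floor((y - y.min()) / cell_size).astype(int)
        counts = {}
        for key in zip(ix, iy):
            counts[key] = counts.get(key, 0) + 1
        return np.array(list(counts.values()))

    # Halving the cell size roughly quarters the average population per cell,
    # so a 5 cm grid holds about a quarter of the soundings a 10 cm grid does per cell.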
TVU Analysis, a Case Story XVII
The population in different cell-size models is visualized here. As can be seen, the 5 cm cell-size model appears to be on the limit when it comes to statistical significance for the TVU analysis.
(Figures: 5 cm cell-size, 10 cm cell-size, 20 cm cell-size)
TVU Analysis, a Case Story XVIII
TVU Analysis, a Case Story XIX
Beam counts can be performed relative to an entire model or relative to a boundary.
The beam count data can be saved to pdf or to a csv-file for further manipulation.
TVU Analysis, a Case Story XX
The requirement for the number of beams per m² can be visualized in the DTM window of NM3 by using the Colourmode ‘Density’ (below right). The beam count statistics can also be applied to this requirement, by using a 1 m cell-size model (below left).
TVU Analysis, Summary
The Total Vertical Uncertainty analysis tool:
• Is designed for testing fulfilment of TVU requirements (IHO SP-44 etc.)
• The reference model must be superior to the test survey with respect to hardware and software as well as acquisition method, observation density, cleaning method etc.
• The cell size of the reference model must be adequately small in order to minimize undesired influences from the generalization of the observations into a single cell value
• The TVU Case Story:
• The effect of employing different post-processing methods was investigated on similar datasets. All results fulfilled the extremely strict requirements and were better than the a priori error budgets
• S-can cleaning provides significant improvements to the TVU compared to manual cleaning
• Smoothing the RTK-based bathy value does not improve the TVU when the RTK height and the motion data originate from the same source, with the same time-tagging reference
• Using delayed heave data does not improve the TVU relative to online heave when acquiring during calm conditions and when sailing without abrupt turns
• POSPac data can be used to optimize the TVU by using the GPS-height only (no heave)
• Cell size must be optimized for statistical significance of the TVU analysis
• Other:
• Tools to investigate beam count, data density etc. are integrated into NaviModel3
Further Information
EIVA Training and Documentation Site:
http://download.eiva.dk/online-training/index.htm
EIVA Knowledgebase:
http://kb.eiva.dk
Frequently Asked Questions:
http://download.eiva.dk/online-training/TOC_Eiva_Software.pdf
Tutorial on the TVU Analysis Tool in NaviModel3:
http://download.eiva.dk/online-training/TVU_TOOL_NM3.pdf
EIVA NaviModel3, Total Vertical Uncertainty Analysis Tool (this document):
http://download.eiva.dk/online-training/NaviModel3%20manuals//Total_Vertical_Uncertainty_Analysis.pdf.
Press Releases at Hydro International:
• Release of NaviModel3 DTM Software, February 2010:
http://www.hydro-international.com/news/id3697-NaviModel_DTM_Software.html
• Release of NaviModel3 DTM Software, version 3.2, September 2011:
http://www.hydro-international.com/news/id5036-New_Version_Navimodel.html