Network Coupling Tool: Main Page

Network Coupling Tool Documentation
This is the method documentation for the stretch-based network coupling prototype tool developed for the
Danish Road Directorate. The primary operation is to match a single- or dual-stringed continuous road
from end to end to a complete digital infrastructure.
The tool is developed to match stretches from the Vejman data set, identified by road number, to the NavTeq digital map.
This document describes the algorithm applied to calculate the coupling.
The first part introduces the terminology used for the rest of the document. This is followed by an
overview walk-through description of the matching process. The latter sections describe details on
specific operations like join modification, path finding and projection calculations. The final section
describes how to run the program and all available options for the execution.
Terminology
This program works by extracting a single road from the vejman data set provided by the Danish
Road Directorate. This is called the source network and consists of sections or source features.
The road or source stretch is matched to a base network, in this case the NavTeq digital network over Denmark. This network consists of links or base features.
The basic idea is to cover the source stretch with base features. Due to the structure of the source
stretch the result is not one single set of base features that provides the full cover. Instead a cover for
every single source feature or section is found.
The cover for a source feature is not just a set of base features, but a structured data set. For every
base feature in a cover, information on offsets and fractions for both the source and the base feature is stored. This is required for visualization and computational reasons. The documentation on
projection calculation covers these terms in detail.
The smallest entity in the source and base network is a feature. A feature represents a single
uninterrupted piece of infrastructure. It may be a longer stretch of road with no intersections or
simply a short dual digitization detail on entries and exits from round-abouts.
Each feature is constructed by a set of properties. At least one of the properties is the geometry
containing information on form and fix points. Instead of using full detail when comparing the geometries, the envelope can be used, with a significant speed-up as a result. The envelope of a geometry is the smallest axis-parallel rectangle that contains all fix points for the geometry. Other
properties are road name, leg number, allowed traffic and traversal direction. The full documentation
on the Navteq digital network contains an exhaustive list of available properties for the base network.
The source network contains far fewer properties. One of the last sections in this document entitled
"Importing data from vejman" contains the full property description for the source network.
Requirements and limitations
The method presented here is developed for application to sparse source networks and denser base
networks. The essential point is that a high percentage (preferably more than 90) of the source
features can be covered by base features by a feature-by-feature method. Subsequent application of
join fitting and path reconstruction will then ensure proper coupling for the remaining features.
The method is not presently prepared for handling large base network discrepancies where multiple
sequential source features are not represented in the base network. Nor will the method work as
expected for large geometric translations or illogical topological differences between the source and
the base network.
The present application to a single vejman road at a time shows only a few cases where the vejman
source data is incorrect, a single case where the automatic matching has been disabled and 6 cases
where the coherency check fails. The estimated failure rate for the entire extracted source network is
less than 0.1 percent.
The next section describes the overall processing.
The work
The work performed for every road is divided into one pass on the base network and two passes on
the source network.
1. A grooming cache generation pass, where the base network is sifted for relevant features.
2. An individual section pass, where each source feature is processed and
3. A set coherence pass, where coherent features in the source network are compared using the
found base network covers.
Once the set coherence pass is completed the result is trivially transformed into a simple
representation.
The cache generation pass
This pass is performed to increase the search speed in the subsequent passes. The intent is to build a
reduced cache from the base network instead of searching the entire network for every source
section.
The process is implemented by finding the source road extent envelope. The envelope is then
extended by 250m in every direction. Every single feature in the base network is then compared to
this envelope.
In the following illustration the black lines depict the dense base network, the red line the source
network stretch and the purple rectangle the search envelope.
Candidate selection envelope
If the base feature envelope overlaps with the grooming envelope the feature is inserted into the
spatial cache.
This process also ensures that the coordinate reference systems are the same. This is handled by
transforming the extended envelope to the base coordinate reference system before the comparison
and transforming every base feature to the source coordinate reference system before inserting it into
the base index.
The explicit details for base network grooming can be found in the section on "Provider sensitive
grooming".
From this point only the groomed base index is used for processing.
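As a rough illustration of this pass, the following Java sketch expands the source extent by 250m and keeps every base feature whose envelope overlaps the search envelope. The Envelope record and the generic feature handling are simplified placeholders rather than the tool's actual classes, and the coordinate reference system transformation described above is omitted.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Function;

    // Simplified, hypothetical envelope type for illustration only.
    record Envelope(double minX, double minY, double maxX, double maxY) {
        Envelope expand(double d) {            // grow the extent in every direction
            return new Envelope(minX - d, minY - d, maxX + d, maxY + d);
        }
        boolean overlaps(Envelope o) {
            return o.minX <= maxX && minX <= o.maxX && o.minY <= maxY && minY <= o.maxY;
        }
    }

    class CachePass {
        /** Keep only base features whose envelope overlaps the extended source extent. */
        static <F> List<F> buildCache(Envelope sourceExtent, List<F> baseFeatures,
                                      Function<F, Envelope> envelopeOf) {
            Envelope search = sourceExtent.expand(250.0);   // 250m grooming margin
            List<F> cache = new ArrayList<>();
            for (F feature : baseFeatures) {
                if (search.overlaps(envelopeOf.apply(feature))) {
                    cache.add(feature);                     // goes into the spatial cache
                }
            }
            return cache;
        }
    }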
The individual section pass
The individual section pass considers every single section from the source network individually.
For each section a set of operations is performed to find the best coverings from the base network.
Candidate selection
The envelope for the source feature is extended by 20m and used to find all possible candidates in the
spatial index. The extension by 20m is performed to ensure that parallel base features for near axis-parallel source features (with narrow envelopes) are included.
Once all base feature candidates have been selected each of the candidates is compared to the source
feature. This comparison is described in the Feature Comparison section.
Candidate grooming
Each candidate is then inspected based on several criteria. The first set of criteria are provider
independent:
 Hints can be used to exclude specific base features from matching one, some or all source
features. In the present implementation this is used only once to alleviate an improper match
due to a slight geometric translation discrepancy shown in the following illustration.
For this and the following illustrations the black lines are the base features and the red lines the
source features. The green crosses are the fix points, the squares are the source network
intersections, and the red polygon depicts a possible coupling.
Groomed by hint
A hint is used to ensure that the depicted coupling cannot be a result of the processing.
 No fraction covered discards the base feature from the candidate set. The result of the
projection indicates that the candidate cannot cover any part of the source feature.
Base feature does not cover source feature
 Max end closeness indicates if the distance at the ends of the coupling is beyond a fixed limit
(set to 50m).
Projection is too far from source feature
 Curve likeness measures beyond 0.6 are discarded as non-similar.
Curvature is too different
 Out of reach is a more detailed criterion introduced to reduce the number of irrelevant short
features for the matching process. A base feature is discarded based on this criterion if the
length of the base feature is less than the distance to the source feature.
Furthermore the fraction of the source feature covered must be less than 0.3.
This criterion has been introduced on a theoretical basis for specific constructed tests. The
actual impact of the rule is unknown and presently expected to be insignificant. It is solely
documented as part of the running code.
The second set of criteria are provider dependent.
 Travel direction can be used if provided in the base feature set. The comparison of source and
base features calculates the traversal direction on the base feature. If this direction of
traversal is prohibited the feature is discarded. NavTeq data contains information on this
criterion in the DIR_TRAVEL property for every feature.
Traveling in wrong direction.
Grooming criteria that are not source feature dependent are described in the section on
"Feature grooming".
This is the basic grooming procedure. If the set after the grooming is empty the best removed
candidate is re-added and the resulting singleton candidate set is used for further processing.
The code has a specific section for specialized grooming for large sets, but this has not yet been
required and no specific functionality has been implemented.
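The sketch below summarizes these grooming criteria in Java. The Comparison record and its field names are illustrative assumptions rather than the tool's API, the hint criterion is omitted, and the choice of "best removed candidate" when the groomed set turns out empty (lowest end closeness) is likewise an assumption.

    import java.util.Comparator;
    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical comparison result for a single source/base candidate pair.
    record Comparison(double sourceFractionCovered, double endCloseness,
                      double curveLikeness, double distanceToSource,
                      double baseLength, boolean traversalProhibited) {

        /** Provider independent and provider dependent grooming criteria from the text. */
        boolean isAcceptable() {
            if (sourceFractionCovered <= 0.0) return false;   // no fraction covered
            if (endCloseness > 50.0) return false;            // max end closeness (50m)
            if (curveLikeness > 0.6) return false;            // non-similar curvature
            boolean outOfReach = baseLength < distanceToSource
                    && sourceFractionCovered < 0.3;           // "out of reach" rule
            if (outOfReach) return false;
            return !traversalProhibited;                      // e.g. NavTeq DIR_TRAVEL
        }
    }

    class CandidateGrooming {
        static List<Comparison> groom(List<Comparison> candidates) {
            List<Comparison> kept = candidates.stream()
                    .filter(Comparison::isAcceptable)
                    .collect(Collectors.toList());
            if (kept.isEmpty() && !candidates.isEmpty()) {
                // Re-add the best removed candidate (illustrative: smallest end closeness).
                kept = List.of(candidates.stream()
                        .min(Comparator.comparingDouble(Comparison::endCloseness))
                        .get());
            }
            return kept;
        }
    }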
The groomed set is then passed to the selected covering method (Matcher). The result of the matcher
is the cover of the source feature.
Cover refining
As the projections calculated from the feature comparisons may be imprecise due to limited
computational precision, the cover is re-fitted such that all intermediate sections in the cover are fully utilized. This is
done by sorting the cover based on the comparison source offset and then overriding the fraction of
the base feature to 1.0. This is only applied if the route coherence matcher is chosen.
In the following illustration the top part shows the coverings before the cover fitting. The cover gaps
can easily be identified as the large white triangles between the purple coupling polygons. The bottom
part shows the coupling after the cover refining. The white triangles are now minimized and no
intermediate part of the base features is uncovered.
Fraction correction for covers
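A minimal sketch of this re-fitting step, assuming a hypothetical CoverEntry with a source offset and a base fraction. The text only states that the base feature fraction is overridden to 1.0; which entries count as intermediate (here: all but the last after sorting) and how offsets interact with the override are assumptions.

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical mutable cover entry; field names are illustrative only.
    class CoverEntry {
        double sourceOffset;   // offset of this base feature along the source feature
        double baseFraction;   // fraction of the base feature used by the coupling
    }

    class CoverRefiner {
        /** Re-fit a cover so that intermediate base features are fully utilized. */
        static void refine(List<CoverEntry> cover) {
            cover.sort(Comparator.comparingDouble(e -> e.sourceOffset));
            for (int i = 0; i < cover.size() - 1; i++) {
                cover.get(i).baseFraction = 1.0;   // override the base feature fraction
            }
        }
    }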
When this is completed for all source features the set coherence can be checked and improved.
The set coherence pass
This pass inspects every single feature, but not individually as in the previous step. Both the beginning
and the end intersection are inspected for coherent coupling.
The documentation on "join modifications" covers the details of each operation. This section will only
describe the overall considerations.
If no other features to or from an intersection are found the current feature covering is tightened to the
respective end.
If there are one or more connected features the coverings of those features are retrieved. The join
modification procedure is then applied to ensure optimal coupling at the intersection.
If the base features are not directly connected and the join modification is unsuccessful in finding a
valid coupling then a path finding method (described in the Path Finding section) is applied to
generate a connecting path of base features. If a path is found join modification is applied to insert
the found base features into the current and preceding covers.
If neither join modification nor path reconstruction is successful the source feature coverings are left
unchanged.
The final work in the coupling calculation is simply to transform the coverings to an accessible
representation. During this transformation statistics are collected and numerous warning and error
criteria are checked.
Details
The following sections describe the different operations in greater detail.
Comparing features
The precision in comparing two features is essential. Comparing two features is not just done by
simple full geometric comparison. Every single digital network may have different sectioning
constraints or rules. There is therefore no guarantee that a strict one-to-one, one-to-many or many-to-one coupling exists for any non-trivial subset of the networks.
This problem is handled by using projections instead of the actual full geometric extent of the
features.
The first operation, however, is to decide the traversal direction on the base feature. The question is
whether or not moving along the digitization direction of the source feature also moves the closest point on
the base feature in the base feature digitization direction.
The following illustration shows some examples of direction detection cases. The top row are all
detected as forward travel direction. The bottom row is detected as reverse direction. The last
example is perpendicular and direction of travel is not relevant. In the illustration the vehicles travel
from the start (the rounded end) to the end (the arrow end) of the features.
Direction comparison illustration
This information is crucial as it is used to detect illegal traversal on candidates for the cover finder.
The second operation is to project the end points of the features correctly onto each other and finally
estimate the offsets and fractions covered for both source and base feature.
Detecting traversal direction
The detection of travel direction is done by inspecting the start and end points of the source feature
and the base feature.
From these, four distance measures between the features are calculated:
 The source start to base start measure: |s->s|
 The source start to base end measure: |s->e|
 The source end to base start measure: |e->s|
 The source end to base end measure: |e->e|
The traversal direction is then detected based on the below logic on these measures.
|s->s| < |s->e|   |e->s| < |e->e|   Direction
true              true              see below
true              false             forward
false             true              reverse
false             false             see below
If a trivial decision cannot be made, two maximum measures are used:
 Maximum distance from source feature start to start or end of base feature: |max_start|
 Maximum distance from source feature end to start or end of base feature: |max_end|
The non-trivial tie is then broken by selecting the traveling direction that minimizes the maximum
point to point distance at both ends of the traversal.
 |max_start| > |max_end|: travel direction is forward if and only if |s->s| < |s->e|
 |max_end| >= |max_start|: travel direction is forward if and only if |e->e| < |e->s|
The example below is not one of the trivial cases where the ends correspond pairwise in either the same
or the opposite direction. Both ends of the red source feature are closer to the start of the base feature.
The traveling direction is then found by inspecting the |max_start| and |max_end| measures. In this
case the |max_start| measure is the longest and the travel direction is then decided by the |s->s| <
|s->e| inequality. As this inequality is fulfilled the traveling direction is forward.
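The detection logic above can be sketched directly in Java. The Pt record is a simplified stand-in for the geometry end points; the tie-break follows the two rules listed above.

    // Simplified 2D point; the real tool works on full geometries.
    record Pt(double x, double y) {
        double dist(Pt o) { return Math.hypot(x - o.x, y - o.y); }
    }

    class DirectionDetector {
        /** True = forward traversal of the base feature, false = reverse. */
        static boolean isForward(Pt srcStart, Pt srcEnd, Pt baseStart, Pt baseEnd) {
            double ss = srcStart.dist(baseStart);   // |s->s|
            double se = srcStart.dist(baseEnd);     // |s->e|
            double es = srcEnd.dist(baseStart);     // |e->s|
            double ee = srcEnd.dist(baseEnd);       // |e->e|

            if (ss < se && ee < es) return true;    // ends correspond pairwise: forward
            if (se < ss && es < ee) return false;   // ends correspond crosswise: reverse

            // Non-trivial tie: minimize the maximum end point distance.
            double maxStart = Math.max(ss, se);     // |max_start|
            double maxEnd = Math.max(es, ee);       // |max_end|
            return (maxStart > maxEnd) ? (ss < se) : (ee < es);
        }
    }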
Projecting end points
The calculation of optimal projections is done by considering the end points of the geometries for the
source feature and the other or base feature.
The projection is calculated as an offset and a fraction. The offset indicates the distance from the
traversal start and the fraction indicates how much of the feature is covered by the coupling. It is
guaranteed that the sum of the offset and the fraction is not greater than 1.
To avoid a reducing projection cycle the projection is restricted to having a zero offset on at least one
of the features and an offset and fraction sum of 1 on at least one of the features. In other terms this
prevents rounding errors and bending features from repeatedly projecting reductions of the coupling.
First the starting point of the other or base geometry is projected onto the source geometry. If the
offset for this projection on the source feature is not zero the source starting point is fixed to the
projected point and the offset on the other feature is fixed to zero.
Then the end point of the other or base geometry is projected onto the source geometry. The fraction
is then calculated from the offset of this point projection and the previously found offset. If the sum of
the source fraction and offset for this projection on the source feature is not 1.0, the source end point
is fixed to the projected point, and the fraction on the other feature is fixed to one minus the other
offset. This ensures that at least the base feature or the source feature is covered to the end.
If the other or base feature start point has not been fixed the next step is to project the source
feature starting point onto the base feature. This is trivially done by a nearest point calculation as
above. The offset and other starting point is then set accordingly to this projection.
If the other or base feature end point has not been fixed the final step of the projection calculation is
to project the source end point onto the other or base geometry. The other end point and fraction is
then set accordingly.
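The offset-and-fraction bookkeeping can be illustrated with a small record. This is only a sketch of the representation and its invariant; the fixing rules described above (zero offset on at least one feature, offset plus fraction of 1 on at least one feature) are left to the surrounding procedure.

    // Hypothetical projection record: offset and fraction along one feature, both in [0, 1].
    record Projection(double offset, double fraction) {
        Projection {
            if (offset + fraction > 1.0 + 1e-9)
                throw new IllegalArgumentException("offset + fraction must not exceed 1");
        }

        /** Build a projection from the positions (0..1) of two projected end points. */
        static Projection between(double startPosition, double endPosition) {
            double from = Math.min(startPosition, endPosition);
            double to = Math.max(startPosition, endPosition);
            return new Projection(from, to - from);   // the fraction is measured from the offset
        }
    }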
For some rare cases the geometry of the source or base feature cannot be handled with simple point
projections. This is specifically the case with self-overlapping features. In these cases a more
cumbersome projection method is applied.
Complex tracking is applied if the features in the comparison are self-overlapping. This uses a
stepwise traversal to find the longest and tightest match between the features.
Essentially the shortest feature is lined up along the longest feature and simple offset search is used
to find the offset providing best curve likeness and end closeness.
Comparison results
Each comparison is rated based on two measures. The curve likeness and the end closeness.
Curve likeness
The curve likeness is a measure of how alike the curvatures of the projected roads are. The measure is
based on a set of unit vectors from each geometry. The number of unit vectors is set by the Parts
property.
The unit vectors are traversed in the source traversal direction. The length of the difference vector of
each pair of vectors is accumulated and averaged.
The average pairwise difference is the curveLikeness. The lower the better. The range of the
curveLikeness is [0..2]. This interval is determined by the maximum and minimum difference between
the end points of two origin-fixed unit vectors. A curve likeness of 0.0 is a complete curvature match
and a score of 2.0 is exact opposite curvature. The score 2.0 is only theoretically possible as the
traversal detection applied earlier will result in a reverse traversal yielding a curve likeness of 0.0.
Direction unit vectors.
End closeness
The end point closeness is the maximum of the minimal end point projection distances to the other
geometry.
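Both measures are simple aggregations and can be sketched as follows. The Vec record and the assumption that both geometries are sampled into the same number of unit vectors (the Parts property) are placeholders for the real implementation.

    import java.util.List;

    class ComparisonMeasures {
        record Vec(double x, double y) {}   // a unit direction vector sampled along a geometry

        /** Average length of the pairwise difference vectors: 0.0 = identical, 2.0 = opposite curvature. */
        static double curveLikeness(List<Vec> sourceUnits, List<Vec> baseUnits) {
            int parts = Math.min(sourceUnits.size(), baseUnits.size());   // the Parts property
            if (parts == 0) return 0.0;
            double sum = 0.0;
            for (int i = 0; i < parts; i++) {
                double dx = sourceUnits.get(i).x() - baseUnits.get(i).x();
                double dy = sourceUnits.get(i).y() - baseUnits.get(i).y();
                sum += Math.hypot(dx, dy);
            }
            return sum / parts;
        }

        /** End closeness: the maximum of the minimal end point projection distances. */
        static double endCloseness(double startProjectionDistance, double endProjectionDistance) {
            return Math.max(startProjectionDistance, endProjectionDistance);
        }
    }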
Covers
A cover is a structured set of data that provides the entire information on the coupling for a single
source feature to any number (even zero) base features.
To compare covers three aggregated measures are used. These are
 A. Coverage is the percentage of the source feature that is covered by this cover
 B. Maximum end closeness is the maximum projected end closeness for all base features in
the cover
 C. Maximum curve likeness is the maximum curve likeness for all base features in the cover
These measures are used in combination to determine if one cover is better than another.
The measures are ordered differently. The higher the coverage the better, but the lower the maximum
end closeness or maximum curve likeness the better.
To avoid discarding almost similar solutions based on a rounding or insignificant difference (e.g. in
coverage) delta intervals are introduced.
If the measure difference for two different covers is within this delta value they are considered equal
on this measure and the next measure is considered. If the covers are equal on all measures
respecting these delta values a second pass of all the measures in the same order with strict
comparison is used to break the tie.
Obviously if the covers are still equal the selection of the best cover is irrelevant. Either one of the two
covers is best.
The default comparison order is A, B and then C.
The corresponding delta values are 0.02 (2 percent), 5 (meters) and 0.1 (5% of the [0..2] span). These
values have been set to ensure that a minor (or close to irrelevant) difference or computational
inaccuracy in one measure does not overshadow a grave difference in another measure.
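A sketch of this two-pass comparison in Java, assuming the three aggregated measures are available as plain numbers:

    // Hypothetical aggregated cover measures; field names are illustrative.
    record CoverMeasures(double coverage, double maxEndCloseness, double maxCurveLikeness) {

        /** True if this cover is better than the other, using the documented order A, B, C. */
        boolean isBetterThan(CoverMeasures other) {
            int cmp = compare(other, new double[] {0.02, 5.0, 0.1});   // first pass with deltas
            if (cmp != 0) return cmp > 0;
            return compare(other, new double[] {0.0, 0.0, 0.0}) > 0;   // strict tie-break pass
        }

        private int compare(CoverMeasures other, double[] deltas) {
            // A: higher coverage is better.
            if (Math.abs(coverage - other.coverage) > deltas[0])
                return coverage > other.coverage ? 1 : -1;
            // B: lower maximum end closeness is better.
            if (Math.abs(maxEndCloseness - other.maxEndCloseness) > deltas[1])
                return maxEndCloseness < other.maxEndCloseness ? 1 : -1;
            // C: lower maximum curve likeness is better.
            if (Math.abs(maxCurveLikeness - other.maxCurveLikeness) > deltas[2])
                return maxCurveLikeness < other.maxCurveLikeness ? 1 : -1;
            return 0;   // equal on all measures within the deltas
        }
    }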
Finding a cover
A cover is a set of base features that covers any fraction of a source feature. The base set may be
empty, whereby the fraction of the source feature that is covered obviously is zero. It may also be the
full set of all base features, whereby the fraction covered hopefully is 100%. However, this cover will
have an excessive overlap and will thus not correctly correspond to the single source feature in the base
network.
How to find a good cover
As mentioned above any subset of the base features is a cover for the source feature. We wish to find
a good cover and ultimately the best cover. We need a full, tight and non-overlapping cover. To find
this we need the information found when comparing the base and source features. Furthermore we
need the set of base feature candidates to find the cover in. From this information different strategies
can be applied to find a good and valid cover.
Incumbents
Incumbents are solutions to a problem that are used while finding a better solution. If no better
solution is found once the calculation ends the best incumbent is the best solution unless a better
solution has erroneously been discarded.
The full set of all solutions is the set generated by enumeration. This set, however, is
huge. For 10 candidates the count of all solutions is 1024. For 11 candidates the number
is 2048. The growth is exponential and at 20 candidates there are 1,048,576 solutions. We are only
interested in a few of these solutions and the cover finders (or matchers) in this code return the best
2 solutions encountered. These are likely to be very close when compared, but differ in base feature
usage by at least one base feature.
The next section describes the branch and bound algorithmic approach and the specific
implementation used here. The following two sections describe alterations to the basic branch and
bound method.
Branch and bound based covers
A generic description of the branch and bound method can be found at
http://en.wikipedia.org/wiki/Branch_and_bound.
The specifics for this implementation are in the splitting or branching procedure and the bounding
methods used for pruning.
Branching procedure
In this implementation each subset is described by two lists of candidates. A list of included
candidates which must be in the solution and a list of excluded candidates that are not allowed to
enter the solution. The subset definition that defines the full set of solutions is simply the set where
no candidates are forced, neither included nor excluded. Splitting any set is then done by selecting any non-forced candidate. Based on this an inclusion subset and an exclusion subset can trivially be formed.
This is a disjoint separation of the original set due to the binary nature of strict inclusion or exclusion.
The basic candidate selection method is simply selecting the candidates one-by-one in no particular
order. This is applied in the simplest matcher.
In the following illustration the red links are excluded, the green included and the blue valid
candidates for the split operation.
Valid next candidates for simple
branching
Bounding methods
This implementation uses several different pruning approaches instead of the single bound used in the
generic method description. This is implemented to prune branches as early as possible and
effectively reduce the number of recursive method invocations and memory allocations.
For any subset the first operation is to build the forced included cover and maximum possible cover.
The maximum cover is the best possible cover built from the non-excluded candidates,
ignoring overlap and comparison measures for the non-included candidates. This cover is compared to
the incumbent. If the incumbent is superior to the maximized cover neither this nor any cover in a
subset from this branch can provide an improving solution. The branch is thus pruned. If the present
forced included cover covers more than 99.9% of the feature, the solution is automatically accepted
as a valid solution and compared to the incumbents. No further branching is performed and the
remaining free candidates for this subset are ignored.
At this point the next candidate must be selected. If there are no more valid candidates the subset is
a leaf and is considered a solution valid for comparison to the incumbent, regardless of the actual cover
fraction.
If there is a valid next candidate an inclusion set and an exclusion set are created. The inclusion set is
now inspected for introduced overlap and extended cover. If the introduced overlap is more than 1%,
the included set is discarded. Similarly, the included set is also ignored, if the cover is not extended at
all.
The forced exclusion subset is always inspected, as the first check for a subset is the best possible
cover.
The connectivity is loose and traversal feasibility is not required.
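The branching and bounding described above can be summarized in the following schematic Java sketch. The Cover interface and the abstract helpers stand in for the tool's real cover, overlap and comparison classes; they are assumptions used only to show the control flow.

    import java.util.List;

    interface Cover {
        double coverage();                    // fraction of the source feature covered
        boolean isBetterThan(Cover other);    // the measure comparison described earlier
    }

    abstract class BnbMatcher<C> {
        protected Cover incumbent;            // best solution encountered so far

        /** Explore the subset defined by the included and excluded candidate lists. */
        void branch(List<C> included, List<C> excluded, List<C> free) {
            Cover bound = maximumPossibleCover(included, free);   // ignores overlap and measures
            if (incumbent != null && incumbent.isBetterThan(bound)) return;   // prune the branch

            Cover forced = coverOf(included);
            if (forced.coverage() > 0.999) {  // accept; remaining free candidates are ignored
                updateIncumbent(forced);
                return;
            }
            if (free.isEmpty()) {             // leaf: a solution regardless of cover fraction
                updateIncumbent(forced);
                return;
            }

            C candidate = free.get(0);        // simplest selection: next candidate in order
            List<C> rest = free.subList(1, free.size());

            // Inclusion branch: skipped if it adds more than 1% overlap or no new cover.
            if (introducedOverlap(included, candidate) <= 0.01
                    && extendsCover(included, candidate)) {
                branch(append(included, candidate), excluded, rest);
            }
            // The exclusion branch is always inspected.
            branch(included, append(excluded, candidate), rest);
        }

        private void updateIncumbent(Cover solution) {
            if (incumbent == null || solution.isBetterThan(incumbent)) incumbent = solution;
        }

        // Hypothetical helpers the real implementation would provide.
        protected abstract Cover coverOf(List<C> included);
        protected abstract Cover maximumPossibleCover(List<C> included, List<C> free);
        protected abstract double introducedOverlap(List<C> included, C candidate);
        protected abstract boolean extendsCover(List<C> included, C candidate);
        protected abstract List<C> append(List<C> list, C element);
    }

The connected and route based matchers described next differ only in how the next candidate is selected from the free candidates.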
Connected covers
These are also found based on a branch and bound approach. The branching candidate selection is
however restricted in comparison to the selection in the simple method above.
For every subset in this calculation it is only valid to branch on links connected to the already included
set. If the included set is empty in a branch calculation the next non-excluded candidate is used.
This approach is specifically effective at reducing the number of valid subsets for larger overlapping
and sparsely connected candidate sets.
The computational impact of checking connectedness is small compared to the reduction in
inspected subsets and the improved individual solution coherence.
Valid next candidates for connected branching
Routing based covering
This is similar to the connected matcher. However, it requires not only that the next candidate is
connected to the included set; the next candidate must also form a valid route in the projection direction.
This reduces the number of valid subsets even further and reduces the checking complexity to end
point inspections.
This method is superior both in computation time and solution precision for most cases. However, the
digital networks are known to have significantly different representations and both the simple and
connected matcher may in a few extraordinary cases provide better or more valid covers.
Valid next route based candidates
The next sections cover specifics used for set refinement and data import.
Modifying joins
The modification of interfeature joins is applied to ensure tight and proper joins in the base network.
The basic join operations are fitting to an end, fitting to an intersection, fitting on a feature and
inserting a path of features between two covers.
Fitting to an end
Any cover can be fitted to an end simply by ensuring that the base feature in the requested end of the
cover to fit is fully utilized in the appropriate end. This is not just handled by setting the used fraction
to 1.0, but has to consider the actual fraction and offset on the base feature. Especially for single
feature covers this inspection is vital.
The following illustration contains two examples of the fitting operations. The original covering only
covers part of the base feature (bottom black line). Fitting to the beginning will modify the purple
coupling polygon to include the left light purple triangle (assuming that the base feature starts on the
left). This will ensure full coverage from the start of the base feature. Likewise, fitting to the end will
include the right light purple triangle.
Fitting to an end
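One reading of this operation in code, assuming the offset and fraction are measured along the base feature in its traversal direction (an assumption; the real implementation also inspects the source side):

    // Hypothetical end-most cover entry; offset + fraction <= 1 along the base feature.
    class EndEntry {
        double baseOffset;     // offset along the base feature (0..1)
        double baseFraction;   // fraction of the base feature that is used
    }

    class EndFitter {
        /** Extend the entry so the base feature is fully used from its start. */
        static void fitToStart(EndEntry entry) {
            entry.baseFraction += entry.baseOffset;        // claim the uncovered leading part
            entry.baseOffset = 0.0;
        }

        /** Extend the entry so the base feature is fully used to its end. */
        static void fitToEnd(EndEntry entry) {
            entry.baseFraction = 1.0 - entry.baseOffset;   // claim the uncovered trailing part
        }
    }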
Advanced fitting
This section is divided into three parts: a) simple fitting with one ingoing and one outgoing segment,
b) multiple segment fitting where the sum of ingoing and outgoing segments is more than two, and c)
path fitting which can be applied for disjoint segment sets.
Simple one-to-one fitting
Intersections between source features do not necessarily result in tight matches in the base network.
The basic operation of fitting an intersection can be separated into two distinct classes.
1. Fitting on a base feature where the source feature intersection is on a base feature more
than some specific threshold from a base feature end point.
2. Fitting to a base intersection where the corresponding intersection on the base features
is closer than the threshold to a base feature intersection.
In case 1 the covers are checked for overlap or gap. Any gap or overlap is then corrected by simple
symmetric modifications to the covers.
Fitting on a base feature
In case 2 the modifications are skewed to ensure that the source intersection is coupled directly to the
base network intersection. The threshold is presently fixed to 10 meters.
Fitting to an intersection
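A schematic sketch of the classification and the case 1 correction. The 10 meter threshold is from the text; splitting a gap or overlap evenly between the two covers is an assumed interpretation of the "simple symmetric modifications".

    class SimpleJoinFitter {
        static final double INTERSECTION_THRESHOLD = 10.0;   // meters, fixed in the tool

        /** Case 2 applies when the coupling end is near a base network intersection. */
        static boolean fitsToBaseIntersection(double distanceToNearestBaseIntersection) {
            return distanceToNearestBaseIntersection < INTERSECTION_THRESHOLD;
        }

        /** Case 1: correct a gap (positive) or overlap (negative) symmetrically. */
        static double[] symmetricCorrection(double gapOrOverlap) {
            // Each cover is adjusted by half of the discrepancy towards the other.
            return new double[] { gapOrOverlap / 2.0, gapOrOverlap / 2.0 };
        }
    }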
Multiple segment fitting
If the one-to-one fitting above is not applicable multiple segment fitting is used instead. For this
section we introduce the term bundle for a combined set of covers. Thus there is an ingoing bundle
and an outgoing bundle.
This fitting uses aggregated topological information to find a common intersection for the bundles and
then fits all covers of a bundle to the common intersection. This method requires a high degree of
logical consistency both in the source and the base network, but is not based on any direct geometric
information that is not already provided by the individual roadcomparer instances in the actual
covers.
The first check is simply to find the closest intersection to the relevant end of the covers in the
bundles. This is extracted by inspecting the direction and offset for the matched segments. If all have
the same most likely intersection this is selected as common intersection for the fit operation. All
covers are then modified to match this intersection. For the incoming bundle all covers are modified at
their ends whereas the outgoing bundle is modified at the beginnings.
Due to regular and expected discrepancies in the network representations the simple approach
above is not sufficient for complete operation. The next step in the operation is to apply a Bayesian
voting mechanism to select the most likely intersections and then solve a pairwise set partitioning
problem on the pairs in the bundles. If the set partitioning method results in only a single set based
on one specific intersection this is selected for fitting. If on the other hand the result is multiple
disjoint sets the source intersection is determined to be multiply digitized in the base network. The
bundles are then split into sets according to the set partitioning solution and the fitting operation,
simple or multiple segment, is repeated for each set individually.
Path fitting
This operation is used if the covers for the source features are not on the same or connected base
features. This is a special and rarely applied method.
Every single feature of the path is compared to the source features. If the path feature projection
covers any part of the source feature the path feature is added to the beginning of the source cover.
Otherwise, the path feature is added to the end of the connected cover.
Finding paths
If necessary the program can find a shortest path in the base network to connect two base features.
The applied method only searches paths up to length 2 and uses a full enumeration approach on all
base features in a buffer area covering the gap between the two features to connect.
Search envelope for path finding
First all individual base features are tested. If any of these directly connects the two disconnected
features the shortest one is returned. If no direct link is found the search is repeated with two links
instead. Again the shortest connection is selected.
If no connection is found a warning is issued and the path search returns an empty result. This mode
of operation is selected as any distance beyond two links in the automated coupling is most likely due
to inaccuracies in other calculations.
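The two-stage search can be sketched as below. The Link interface and its connectivity helpers are hypothetical placeholders for the tool's base feature handling; the candidates are assumed to be the base features found inside the buffer envelope.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    class PathFinder {
        interface Link {
            double length();
            boolean connects(Link a, Link b);                 // does this single link join a and b?
            boolean connectsVia(Link other, Link a, Link b);  // do this link and other join a and b?
        }

        static Optional<List<Link>> findPath(Link from, Link to, List<Link> candidates) {
            // Pass 1: single connecting links, shortest first.
            Optional<Link> single = candidates.stream()
                    .filter(l -> l.connects(from, to))
                    .min(Comparator.comparingDouble(Link::length));
            if (single.isPresent()) return Optional.of(List.of(single.get()));

            // Pass 2: full enumeration of link pairs inside the buffer area.
            List<Link> best = null;
            double bestLength = Double.MAX_VALUE;
            for (Link a : candidates) {
                for (Link b : candidates) {
                    if (a != b && a.connectsVia(b, from, to)
                            && a.length() + b.length() < bestLength) {
                        bestLength = a.length() + b.length();
                        best = List.of(a, b);
                    }
                }
            }
            // Anything longer than two links is treated as a failure (warning in the tool).
            return Optional.ofNullable(best);
        }
    }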
Hints
Some situations may arise where the method requires specific input or information that is not
available from any of the data sources.
The hinting system is generically built to provide information on invalid links or coupling to specific
links. This has at present only been encountered at a single location where a specific source section
may not be matched to a specific base link.
Due to this sparse usage, hints are manually found and hard-coded into the program.
Importing data from vejman
The features that can be generated from the Vejman data are:
 Sections, stretches of infrastructure between intersections
 Fixpoints, points provided from the vejman interface for positioning the generated features
 Intersections, polygons generated from the rectangle intersection information provided by
vejman
 Centroids, single point intersections interpolated from the corner points of the corresponding
intersection polygon.
Generated point or single entity data
The fixpoints, centroids and intersections are all based on a single entity of data from Vejman.
Fixpoints are retrieved as a list of individual xml nodes. This is also the case when generating
intersections or centroids.
The only difference between centroids and intersections is that the intersections are represented by polygon
geometries, which take their extent from the Vejman data, while centroids are point geometries that
simply represent the equally weighted center of the intersection polygon.
The visualization does not show zero extent polygons, which many Vejman intersections are. It is thus
advised to use the centroids for visualization and the intersections for modeling purposes.
Construction algorithm for sections
Sections cannot be retrieved directly from Vejman, but have to be reconstructed from intersection and
fixpoint data.
This requires a strict reconstruction method to ensure correct data representation.
The Vejman data set uses road part numbers to identify road segments as well as mileage
(kilometrering) and leg numbers on intersections. All of these have to be reverse engineered to generate
the correct source feature corresponding to the Vejman section.
We apply a state based machine to a specifically sorted list of intersections to rebuild the features
from the Vejman data set.
The sorting is arranged to ensure that all intersections, independent of legs, are traversed in increasing
order. This is crucial to the state based traversal. The method rebuilds the features by following these
steps for every intersection in the sorted order:
1. For every part number {0, 1, 2} count the number of legs.
o If there are zero legs for the part number. Ignore the part number for this intersection.
o If there is a single leg for the part number commence or end a feature. Either by setting
the start of the road part to the current intersection or by generating a feature and
clearing the start intersection for the road part.
o If there are two legs with the part number generate a feature from the current start and
set the new start of the road part to current intersection. If the start is not set - issue
a structural warning.
o If there are more legs issue a structural warning and clear the start intersection for the
road part.
A feature is created by extracting the provided fix points from start intersection to current intersection
for the specific road part and then generating a geometry and corresponding properties for this.
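The reconstruction steps translate into a small state machine, sketched below. The Intersection interface, legCount and the feature/warning callbacks are hypothetical placeholders; the per-road-part start state follows the rules listed above.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class SectionBuilder {
        interface Intersection { int legCount(int partNumber); }

        private final Map<Integer, Intersection> openStart = new HashMap<>();  // state per road part

        void rebuild(List<Intersection> sortedIntersections) {
            for (Intersection current : sortedIntersections) {
                for (int part = 0; part <= 2; part++) {
                    int legs = current.legCount(part);
                    if (legs == 0) continue;                              // ignore this part number
                    Intersection start = openStart.get(part);
                    if (legs == 1) {                                      // commence or end a feature
                        if (start == null) openStart.put(part, current);
                        else { createFeature(part, start, current); openStart.remove(part); }
                    } else if (legs == 2) {                               // end one feature, start the next
                        if (start == null) structuralWarning(part, current);
                        else createFeature(part, start, current);
                        openStart.put(part, current);
                    } else {                                              // more than two legs
                        structuralWarning(part, current);
                        openStart.remove(part);
                    }
                }
            }
        }

        void createFeature(int part, Intersection from, Intersection to) { /* extract fix points, build geometry */ }
        void structuralWarning(int part, Intersection at) { /* report a structural warning */ }
    }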
Importing data from NavTeq
The data for the base network is simply read directly from the shapefile. There is no special import
handling and only the provider specific grooming while populating the working cache actually inspects
the data.
Feature grooming
Whenever a geospatial index (an index specially constructed to perform geometric extent searches) is
built this program applies a grooming operation to reduce the size of the index. The grooming
operation consists of a geometric accept criterion and a set of provider specific semantic criteria.
The geometric criterion
This criterion ensures that the features in the index are within reasonable distance or placement
regarding the expected searches in the index. This is implemented by providing the grooming
operation with an extent envelope which all accepted features' envelopes must overlap.
Groomed and accepted feature examples
Provider specific criteria
The presently implemented support for an external provider is for NavTeq only. The grooming of
NavTeq features is based on the following properties:
Property     Description                  Discard criterion
FERRY_TYPE   Ferry classification         FERRY_TYPE==B
AR_AUTO      Automotive traffic allowed   AR_AUTO==N
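A minimal sketch of the NavTeq discard test, assuming the shapefile attributes are available as a simple string map (an assumption, not the tool's actual attribute access):

    import java.util.Map;

    class NavTeqGrooming {
        /** True if the base feature should be discarded from the index. */
        static boolean discard(Map<String, String> properties) {
            return "B".equals(properties.get("FERRY_TYPE"))   // ferry link
                || "N".equals(properties.get("AR_AUTO"));     // automotive traffic not allowed
        }
    }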
Reference systems
Each geometric feature has a corresponding reference system. For the source and base data these
may differ. The grooming operation also ensures that all features are represented using the same
reference system. Otherwise all matching and comparison operations will return wrong results.
All base features' geometries are transformed to the source coordinate reference system during the
grooming process.
Running the tool
This application has several execution modes and the options provided for invocation are described in
the Configuration section.
The tool itself uses little more memory than the actual digital network. This is retrieved in full before
it is groomed, and use of the -Xmx option for the Java runtime is recommended.
A setting of approximately 5-8 times the base network shape file size should be sufficient. Activating
any visualization mode doubles this requirement.
Even though the tool is capable of accepting work on many roads (-road 100-600) at once, most Java
engines cannot keep up with the garbage collection and performance will degrade at some point. When the
garbage collection uses more than 98% of the execution time the runtime environment throws an
exception to signal this. It is strongly discouraged to disable this exception even though some
environments allow it. If the garbage collector gets stressed it will not recover unless the program
is very close to completing. Disruptive testing shows that the garbage collection can go beyond
99.99% of the execution time. The remainder of the program may then run more than 10,000 times slower than
the already completed part.
Tests show that running up to 25 roads at every execution incurs no significant performance impact.
Some servers and java implementations may be able to handle many times this amount of roads.
Options
Option                Syntax                                             Description
cache                 -cache {PATH-TO-CACHE}                             Use caching of VD-data
centroids             -centroids                                         Display point centroids for vejman intersections
disableSetRefinement  -disableSetRefinement                              Disable set refinement
export                -export {basename}                                 Export source features and matching polygons to shape files
fixpoints             -fixpoints                                         Display fixpoints from vejman
intersections         -intersections                                     Display polygon intersection extent from Vejman
matcher               -matcher {Bnb|Connected|Route}                     Select internal cover finder. See descriptions below (Route is default)
navteq                -navteq                                            Show navteq segments as polylines
navteqPath            -navteqPath {path to navteq shapefile}             File to get navteq shapes from
nowork                -nowork                                            Forcibly disable all active calculations
progress              -progress {Null|Console|Window}                    Enable progress dialog
road                  -road {[0-9,-]*}                                   Select road numbers to process, e.g. 1,2,3 or 1-3 or 1-100,103-108
sections              -sections                                          Show Vejman sections as multilines
select                -select {[0-9]+},{[0-9]+}                          Select a specific section for highlighting; the numbers are the from and to intersections respectively
show                  -show                                              Only show the result from the expected result file
visualizer            -visualizer {Null|FixedLevel|MultiLevel}[,Level]   Enable progress visualization
The visualizer levels are interpreted as follows:

Level     Description                      Examples
ALL       All available visualizations     Segment grooming
FINEST    Finest available visualizations  Segment grooming
FINER     Finer visualization              Full set, groomed set
FINE      Fine visualization               Present source section
INFO      Information                      Section match result
WARNING   Warning situations               Low coverage
SEVERE    Severe problems                  Disconnected section matching
The available matchers are:

Name       Description
Bnb        Simple branch and bound based covering method
Connected  Branch and bound based method requiring connected covers
Route      Branch and bound based method requiring covers that are valid routes
Author:
jbw@hermestraffic.com