COSC 6114
Prof. Andy Mirzaian
Planar Point Location:
Knowing where you are on the map
References:
• [M. de Berg et al.] chapter 6
• [O’Rourke ’98] chapter 7.6
• [Edelsbrunner ’87] chapter 11
• [Preparata-Shamos ’85] chapter 2.2
Applications:
• GIS: Geographic Information Systems
• Computer Graphics
• Mobile Telecommunication
• Mobile Robotics
• …
Point Location in a Planar Subdivision
PSLG = Planar Straight-Line Graph
[Figure: a PSLG and a query point q.]
Locate a query point q in the PSLG: find which face of the PSLG contains q.
Complexity Measures:
• S - space to store the point location data structure
• T - preprocessing time to construct the data structure
• Q - query time to locate the PSLG face that contains the query point.
Point Location in a Planar Subdivision
• 1D Optimal method: sorted array, S = O(n), T = O(n log n), Q = O(log n).
• 2D, Shamos [1975], Slab Method: S = O(n²), T = O(n²), Q = O(log n).
• 2D Optimal Methods: S = O(n), T = O(n log n), Q = O(log n):
   - Mulmuley [1990], Seidel [1991]: Randomized Incremental Method.
   - Kirkpatrick [1983]: Triangulation Refinement Method.
   - Edelsbrunner-Guibas-Stolfi [1986], SIAM J. Computing, pp. 317-340.
   - Sarnak-Tarjan [1986], “Planar point location using persistent search trees,”
     Communications of the ACM 29, pp. 669-679.
   - Lipton-Tarjan [1977-79]: Planar Separator Method.
• 2D Line Segment Intersections: Randomized Incremental Method in O(K + n log n) expected time.
The Slab Method
• O(n²) space
• O(n²) preprocessing time
• O(log n) query time
A given PSLG with n vertices (# edges ≤ 3n-6). We may add a large bounding box.
[Figure: a PSLG inside a large bounding box.]
The Slab Method
• O(n²) space
• O(n²) preprocessing time
• O(log n) query time
A given PSLG with n vertices (# edges ≤ 3n-6). We may add a large bounding box.
[Figure: the plane cut into vertical slabs through the PSLG vertices; the query point q lies in one vertical slab.]
Query Answering:
• do binary search among slabs (in x-sorted order).
• do binary search vertically within the located slab.
• each binary search takes Q = O(log n) time.
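A minimal sketch of this two-level binary search, assuming the slab boundaries and the per-slab edge lists have already been built during preprocessing (the names slab_xs, slab_edges and the below() predicate are illustrative, not from the slides):

```python
import bisect

def slab_locate(slab_xs, slab_edges, below, q):
    """Slab-method query answering by two binary searches.

    slab_xs     : sorted x-coordinates of the PSLG vertices (slab boundaries).
    slab_edges  : slab_edges[i] = edges crossing the i-th slab, sorted bottom to top.
    below(q, e) : True if point q lies below edge e inside the slab (assumed given).
    Returns (i, j): q lies in slab i, between slab_edges[i][j-1] and slab_edges[i][j];
    this pair identifies the sub-region (face) of the slab containing q.
    """
    # 1) Binary search among slabs (in x-sorted order).
    i = max(0, bisect.bisect_right(slab_xs, q[0]) - 1)
    edges = slab_edges[i]
    # 2) Binary search vertically within the located slab.
    lo, hi = 0, len(edges)
    while lo < hi:
        mid = (lo + hi) // 2
        if below(q, edges[mid]):
            hi = mid
        else:
            lo = mid + 1
    return i, lo
```

Each of the two searches takes O(log n) time, matching Q = O(log n) above.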
Preprocessing for the Slab Method
The Plane Sweep Method:
• Event schedule: x-coordinates of the PSLG vertices in increasing order.
  Maintain these in a priority queue Q.
• Event status: vertical sorted ordering of the sub-regions within the current slab.
  Maintain this in a dictionary D.
• Create a sorted array of slabs. Every time a slab is completed, dump a copy of
  the current D into the next entry of the sorted array of slabs.
  [This will be the final data structure.]
• Analysis:
  - Event processing takes O(log n) time on Q, O(e_v log n) time on D, and
    O(n) time to dump a copy of D into the permanent data structure. Here e_v is the
    number of edges incident to the current event vertex v.
  - Total preprocessing time T = O(n log n + Σ_v e_v log n + n·n)
                               = O(n log n + n log n + n²) = O(n²).
  - Space = O(n²).
Randomized Incremental Method
Construct the trapezoidal decomposition not by the sweep method but by a
randomized incremental method. This simultaneously constructs the
query search structure and has optimal expected performance.
[Figure: the trapezoidal decomposition of a set of segments inside a bounding box.]
Randomized Incremental Method
Defining features of a trapezoid Δ:
• Δ is defined by up to 4 line segments: left(Δ), right(Δ), top(Δ), bottom(Δ).
  (These are some edges of the PSLG, possibly not all distinct.)
• [Figure: the possible cases for the left and right vertical sides of Δ.]
• right(Δ) is defined symmetrically to left(Δ).
Randomized Incremental Method
CLAIM: If the PSLG has n line segments, then # trapezoids ≤ 3n + 1.
Proof: Assume the 2n end-points are in general position.
Each end-point defines the left/right wall of at most 3 trapezoids.
Except for the leftmost & rightmost trapezoids, each trapezoid is
defined by 2 vertical walls (incident to 2 end-points).
⟹ 2(# trapezoids) - 2 ≤ 3(# end-points) = 6n
⟹ # trapezoids ≤ 3n + 1.
If the end-points are not in general position (i.e., some have equal x-coordinates,
or coincide), then the count is even less. [Could use Euler’s formula too.]
Trapezoidal Map T (S)
The trapezoidal map T(S) of a set S of n non-crossing line segments can be represented,
in O(n) space, by the adjacency structure of its trapezoids.
Adjacency: Δ1 and Δ2 are adjacent iff they share (a portion of) a vertical wall.
A trapezoid Δ has at most 2 left neighbors and at most 2 right neighbors.
[Figure: a trapezoid Δ with left neighbors Δ1, Δ2 and right neighbors Δ3, Δ4.]
D(S): The Query Search Structure
• It is a rooted DAG; each node has out-degree at most 2.
• Leaves (i.e., nodes of out-degree 0) store trapezoids, with 2-way cross-pointers
  to their counterparts in T(S).
• Internal nodes are either endpoints with x-value as key (left/right comparison),
  or a line segment of S (below/above comparison).
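A minimal sketch of these node types and the resulting point-location query, assuming each leaf carries a reference to its trapezoid in T(S) (the class and attribute names are illustrative, not from the slides):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class XNode:            # internal node keyed by an endpoint's x-value
    x: float
    left: "Node"        # query points to the left of x
    right: "Node"       # query points to the right of x

@dataclass
class YNode:            # internal node keyed by a segment of S
    seg: tuple          # ((x1, y1), (x2, y2)) with x1 < x2
    above: "Node"
    below: "Node"

@dataclass
class Leaf:             # stores a trapezoid, cross-linked with T(S)
    trapezoid: object

Node = Union[XNode, YNode, Leaf]

def point_above_segment(p, seg):
    """True if p lies above the line through seg (a cross-product test)."""
    (x1, y1), (x2, y2) = seg
    return (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) > 0

def locate(root, q):
    """Follow D(S) from the root down to the leaf whose trapezoid contains q."""
    node = root
    while not isinstance(node, Leaf):
        if isinstance(node, XNode):
            node = node.left if q[0] < node.x else node.right
        else:
            node = node.above if point_above_segment(q, node.seg) else node.below
    return node.trapezoid
```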
Example: S = { s1 , s2 }
[Figure: step-by-step construction of T and D for S = {s1, s2}. Starting from the bounding
box R (a single trapezoid), insert s1 and then s2; each step shows the resulting trapezoidal
map T (trapezoids labeled A-I) side by side with the search structure D, whose internal nodes
are the x-nodes p1, q1, p2, q2 and the y-nodes s1, s2, and whose leaves are the trapezoids.]
Randomized Incremental Construction of T(S) & D(S)
Input:  a set S of n non-crossing line segments in the plane.
Output: T(S) & D(S).
1. Get a bounding box and initialize T(∅) & D(∅).
2. Randomly permute S into (s1, s2, … , sn).
3. for k ← 1..n do
       (* insert sk & update T(Sk) & D(Sk), where Sk = {s1, s2, … , sk} *)
       Let pk & qk be the left & right end-points of sk, respectively
       Δ0 ← Search(pk, D);  j ← 0
       while qk is to the right of right(Δj) do
            if sk is below right(Δj)
              then Δj+1 ← lower-right-neighbor of Δj
              else Δj+1 ← upper-right-neighbor of Δj
            j ← j+1
       end-while
       Δ0, Δ1, … , Δj are the trapezoids intersected by sk.
       Update T(Sk) & D(Sk) accordingly (see next slide).
   end
[Figure: segment sk, with end-points pk and qk, threading through the intersected trapezoids
Δ0, Δ1, Δ2, Δ3.]
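A minimal sketch of the threading loop in step 3, assuming each trapezoid object exposes its right-defining end-point and its two right neighbors (the attribute names rightp, lower_right, upper_right are illustrative, not from the slides):

```python
def point_above_segment(p, seg):
    """True if p lies above the line through seg = ((x1, y1), (x2, y2)), x1 < x2."""
    (x1, y1), (x2, y2) = seg
    return (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) > 0

def follow_segment(d_root, seg, locate):
    """Trapezoids of the current map intersected by seg, from left to right.
    locate : point-location routine on the current search structure D."""
    p, q = seg                                  # p = left end-point, q = right end-point
    deltas = [locate(d_root, p)]                # Δ0 ← Search(pk, D)
    while q[0] > deltas[-1].rightp[0]:          # qk still to the right of right(Δj)?
        d = deltas[-1]
        if point_above_segment(d.rightp, seg):  # sk passes below right(Δj)
            deltas.append(d.lower_right)
        else:
            deltas.append(d.upper_right)
    return deltas
```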
Example of step 3
[Figure: the case where sk (with end-points pk, qk) lies entirely inside a single trapezoid Δ
of T(Sk-1). In T(Sk), Δ is replaced by four trapezoids A, B, C, D. In D(Sk), the leaf for Δ
is replaced by a small subtree with x-nodes pk, qk and a y-node sk whose leaves are A, B, C, D;
the rest of D(Sk-1) is unchanged.]
Example of step 3
[Figure: the case where sk (with end-points pk, qk) crosses several trapezoids Δ0, Δ1, Δ2, Δ3
of T(Sk-1). In T(Sk) these are replaced by new trapezoids A, B, C, D, E, F, G (pieces with the
same top and bottom segments are merged). In D(Sk), each of the leaves Δ0, … , Δ3 is replaced
by a small subtree with x-nodes pk, qk and y-nodes for sk, whose leaves are the new trapezoids;
the rest of D(Sk-1) is unchanged.]
Complexities
THEOREM: The Randomized Incremental algorithm constructs the trapezoidal map T(S) &
search structure D(S) for a set S of n non-crossing line segments with complexities:
1) O(log n) expected query time for any query point q.
2) O(n) expected size of the search structure.
3) O(n log n) expected construction time.
[All these expectations are on the random ordering of the segments in S.]
Proof of (1):
Height of D(S) ≤ 3n, since each iteration can increase the height by ≤ 3.
So, the worst-case query time is O(n).
“Fix” q and S. Define the random variable
    Xk = # nodes added on the search path of q in D(S) during iteration k,  1 ≤ k ≤ n.
Expected query time for q = E[Σk Xk] = Σk E[Xk].
E[Xk] ≤ 3·Pk, where Pk = probability that some node is added to that path in iteration k.
Let Δq(Sk) = the trapezoid in T(Sk) that contains q.
Pk = Pr[Δq(Sk) ≠ Δq(Sk-1)] = Pr[Δq(Sk) ∉ T(Sk-1)] ≤ 4/k.
[Backwards analysis: “Fix” Sk ⊆ S. Any segment in Sk has probability 1/k of being sk.
T(Sk) is fixed and does not depend on the choice of sk, but T(Sk-1) does.
Pr[Δq(Sk) ∉ T(Sk-1)] = probability that sk defines bottom, top, left, or right of Δq(Sk).
Each has probability ≤ 1/k. Since 4/k doesn’t depend on the choice of sk, we can lift
the “fix” condition.]
Expected query time = Σk E[Xk] ≤ Σk 3·Pk ≤ Σk 12/k ≤ 12·Hn = O(log n).
This too doesn’t depend on q or S. Lift the “fix” condition.
Complexities
THEOREM: The Randomized Incremental algorithm constructs the trapezoidal map T(S) &
search structure D(S) for a set S of n non-crossing line segments with complexities:
1) O(log n) expected query time for any query point q.
2) O(n) expected size of the search structure.
3) O(n log n) expected construction time.
[All these expectations are on the random ordering of the segments in S.]
Proof of (2):
|T(S)| = O(n).
Let κk = # new trapezoids created in iteration k.
|D(S)| = O(n) leaves + Σk (# internal nodes created in iteration k)
       = O(n) + Σk O(κk - 1) = O(n + Σk κk) = O(n²) worst-case.
E[ |D(S)| ] = O( n + E[ Σk (κk - 1) ] ) = O( n + Σk E[κk] ).
Backwards analysis: temporarily fix Sk ⊆ S. For every Δ ∈ T(Sk) and s ∈ Sk define
    δ(Δ, s) = 1 if Δ disappears from T(Sk) when s is removed from Sk, and 0 otherwise.
Since each trapezoid is defined by at most 4 segments,
    Σ_{s∈Sk} Σ_{Δ∈T(Sk)} δ(Δ, s) ≤ 4·|T(Sk)| ≤ 4(3k + 1) = O(k).
Hence
    E[κk] = (1/k) Σ_{s∈Sk} Σ_{Δ∈T(Sk)} δ(Δ, s) ≤ O(k)/k = O(1),
and therefore
    E[ |D(S)| ] = O( n + Σ_{k=1..n} E[κk] ) = O( n + Σ_{k=1..n} O(1) ) = O(n).
Complexities
THEOREM: The Randomized Incremental algorithm constructs the trapezoidal map T(S) &
search structure D(S) for a set S of n non-crossing line segments with complexities:
1) O(log n) expected query time for any query point q.
2) O(n) expected size of the search structure.
3) O(n log n) expected construction time.
[All these expectations are on the random ordering of the segments in S.]
Proof of (3):
Expected construction time = O(1) + Σk [ O(log k) + O(E[κk]) ]
                           = O(1) + Σk [ O(log k) + O(1) ] = O(n log n).
Here the O(log k) term is the expected time to search in D(Sk-1) for the left end-point of
segment sk, and the O(E[κk]) term is the expected time to replace the affected trapezoids.
This completes the proof of the Theorem.
Complexities
COROLLARY: Let S be a planar straight-line subdivision with n edges. There is a
planar point location data structure for S with:
1) O(log n) expected query time for any query point q.
2) O(n) expected size of the search structure.
3) O(n log n) expected construction time.
Proof:
Represent S by a DCEL & construct T(S) and D(S) for the edges of S.
Have each trapezoid Δ ∈ T(S) point to the face of DCEL(S) just below top(Δ).
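A minimal sketch of the resulting query, assuming the trapezoid-to-face pointers of the corollary have been set during preprocessing (the attribute name face is illustrative):

```python
def locate_face(d_root, q, locate):
    """Planar point location in the subdivision S.
    locate : the trapezoid-location routine on the search structure D(S).
    Each trapezoid is assumed to carry, in attribute `face`, the DCEL face of S
    lying just below its top segment (filled in once during preprocessing)."""
    trapezoid = locate(d_root, q)      # O(log n) expected time
    return trapezoid.face              # the face of S that contains q
```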
Dealing with Degeneracy
What if more than one end-point in S has the same x-coordinate?
How about vertical line-segments in S? …
Shear Transform:
    φ : (x, y) ↦ (x + εy, y),   i.e., multiplication by the matrix [ 1  ε ; 0  1 ].
Conceptually assume ε > 0 is sufficiently small.
For a point p = (x, y), the pair (x, y) is taken to represent φ(p) = (x + εy, y).
Properties:
1. No two end-points p & q of S have the same (transformed) x-coordinate.
2. Preserves left/right relationships: p left of q ⟹ φ(p) left of φ(q).
3. Preserves point-line incidences (it is an affine transformation):
   point p above segment s ⟺ φ(p) above segment φ(s).
   [Also holds with “above” replaced by “on” or “below”.]
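The shear never has to be carried out numerically: since ε is only conceptually small, the transformed x-coordinates x + εy can be compared lexicographically as pairs (x, y). A minimal sketch of this symbolic comparison (the function names are illustrative):

```python
def sheared_x(p):
    """Symbolic value of the sheared x-coordinate x + εy: for every sufficiently small
    ε > 0, comparing the pairs (x, y) lexicographically gives the same result as
    comparing the true sheared x-coordinates."""
    x, y = p
    return (x, y)

def left_of(p, q):
    """True if φ(p) is strictly to the left of φ(q)."""
    return sheared_x(p) < sheared_x(q)

# Two end-points with equal x-coordinates: the (conceptual) shear breaks the tie by y.
p, q = (2.0, 5.0), (2.0, 1.0)
assert left_of(q, p) and not left_of(p, q)
```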
Some Facts on Probability
1. Markov Inequality (MI):
   Consider a random variable Z ≥ 0 and a fixed parameter α > 0. Then
       Pr[Z ≥ α] ≤ E[Z] / α.
   Proof: E[Z] = Σ_{z≥0} z·Pr[Z=z] ≥ Σ_{z≥α} z·Pr[Z=z] ≥ Σ_{z≥α} α·Pr[Z=z] = α·Pr[Z ≥ α].
2. Let Z = Z1 + Z2 + … + Zn, all random variables. Then, by linearity of expectation:
   E[Z] = E[Z1] + E[Z2] + … + E[Zn].
3. Let Z = Z1 · Z2 · … · Zn, all random variables. Furthermore, assume that
   all the Zk’s are Independent Random Variables (IRV). Then we have
   E[Z] = E[Z1] · E[Z2] · … · E[Zn].
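A quick numeric sanity check of the Markov Inequality (illustrative only; the exponential distribution and the threshold α = 5 are arbitrary choices, not from the slides):

```python
import random

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(100_000)]   # Z >= 0 with E[Z] = 1
alpha = 5.0
empirical = sum(z >= alpha for z in samples) / len(samples)    # estimate of Pr[Z >= alpha]
assert empirical <= 1.0 / alpha     # Markov bound: Pr[Z >= alpha] <= E[Z]/alpha = 0.2
print(empirical)                    # roughly e**-5, well below the bound
```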
A Permutation Graph G
Let S = { s1, s2, … , sn } be a randomly permuted set of n segments in the plane.
Consider the DAG (Directed Acyclic Graph) G of subsets of S (see the figure below).
• G has n+1 layers 0, 1, … , n. Each node at layer k has in-degree k and out-degree n-k.
• Each source-to-sink path is in one-to-one correspondence with a permutation of S.
• Each edge from subset S′ to S′ ∪ {sk} is labeled by the segment sk.
• Consider a fixed query point q.
• Mark every edge of G if its removal affects Δq, the trapezoid of the corresponding
  trapezoidal map that contains q.
• At most 4 incoming edges of each node in G are marked.
• Fix a source-to-sink path P.
• The 0/1 random variables Xk, k = 1..n, defined below, are Independent Random Variables:
      Xk = 1 if the k-th edge on P is marked, and 0 otherwise.
[Figure: the permutation DAG G for n = 4, with source ∅, sink {1,2,3,4}, and intermediate
nodes the subsets {1}, {2}, {3}, {4}, {1,2}, {1,3}, … , {2,3,4}.]
A Tail Estimate
We showed that the expected query time is O(log n). Now we show that the probability that the
query time for a query point is much more than O(log n) is very small. That is, the
probability that the height of D(S) is much more than O(log n) is very small.
LEMMA 1: Consider a parameter λ > 0. Then for every query point q,
    Pr[ search path length for q > 3λ ln(n+1) ]  ≤  1 / (n+1)^(λ ln 1.25 - 1).
Proof of Lemma 1
Fix a query point q and a (permutation) source-to-sink path P in G.
Consider the 0/1 IRVs Xk, k = 1..n, and let Y = Σ_{k=1..n} Xk.
Length of the search path for q in D(S) ≤ 3·Y.
For any t > 0, by the Markov Inequality applied to e^{tY}:
    Pr[Y ≥ λ ln(n+1)] = Pr[ e^{tY} ≥ e^{tλ ln(n+1)} ] ≤ E[e^{tY}] / e^{tλ ln(n+1)} = E[e^{tY}] / (n+1)^{tλ}.
Since the Xk are IRVs:
    E[e^{tY}] = E[ Π_k e^{tXk} ] = Π_k E[e^{tXk}].
For each k (take t = ln 1.25, and recall Pr[Xk = 1] ≤ 4/k):
    E[e^{tXk}] = e^t·Pr[Xk=1] + e^0·Pr[Xk=0] ≤ e^t·(4/k) + 1·(1 - 4/k)
               = (1.25)·(4/k) + 1 - 4/k = 1 + 1/k.
Therefore
    Pr[Y ≥ λ ln(n+1)] ≤ (1/(n+1)^{λ ln 1.25}) · Π_{k=1..n} (1 + 1/k)
                      = (n+1) / (n+1)^{λ ln 1.25} = 1 / (n+1)^{λ ln 1.25 - 1}.
Since this does not depend on the selection of P or q, the Lemma follows.
A Tail Estimate (continued)
LEMMA 2:   Pr[ height of D(S) > 3λ ln(n+1) ]  ≤  2 / (n+1)^(λ ln 1.25 - 3).
Proof: The slab partition of S is a finer partition than T(S) for any permutation of S.
Any 2 query points in the same trapezoid of the slab partition are equivalent w.r.t. T(S) & D(S),
since they follow the same search path.
The slab partition has ≤ 2(n+1)² trapezoids. Therefore,
    Pr[ search path for some trapezoid Δ of the slab partition > 3λ ln(n+1) ]
        ≤ 2(n+1)² / (n+1)^(λ ln 1.25 - 1)  =  2 / (n+1)^(λ ln 1.25 - 3).
[Figure: the slab partition.]
A Tail Estimate (continued)
LEMMA 2:   Pr[ height of D(S) > 3λ ln(n+1) ]  ≤  2 / (n+1)^(λ ln 1.25 - 3).
Example: For n > 4, take λ = 20  ⟹  Pr[ height of D(S) > 3λ ln(n+1) ] ≤ 1/4
         ⟹  Pr[ height of D(S) ≤ 3λ ln(n+1) ] ≥ 1 - 1/4 = 3/4.
So, with probability ≥ 3/4 the structure has O(log n) worst-case query time.
Similar tail bounds hold for the space and the construction time:
    ⟹  Pr[ query time & construction time & space are all good ] ≥ 1 - (1/4 + 1/4 + 1/4) = 1/4.
THEOREM: One can construct a point-location query data structure with
• O(n log n)  expected construction time, &
• O(n)        worst-case space, &
• O(log n)    worst-case query time.
Proof:
Run (the truncated version of) the randomized incremental algorithm
until it succeeds. The expected number of runs is ≤ 4.
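A minimal sketch of this retry strategy, assuming a randomized builder and threshold-checking helpers are available (build, height_of, size_of and the thresholds are illustrative assumptions, not from the slides):

```python
import random

def build_guaranteed(segments, build, height_of, size_of, max_height, max_size):
    """Rebuild with fresh random permutations until the resulting search structure
    meets the worst-case height and size thresholds. Since each run succeeds with
    constant probability (>= 1/4 for suitable thresholds), the expected number of
    runs is O(1), so the expected total construction time stays O(n log n)."""
    while True:
        perm = segments[:]
        random.shuffle(perm)            # a fresh random insertion order
        d = build(perm)
        if height_of(d) <= max_height and size_of(d) <= max_size:
            return d                    # good worst-case query time & space
```

For simplicity the sketch checks the thresholds only after a full run; the “truncated version” mentioned in the proof would abort a run as soon as a threshold is exceeded, which can only reduce the expected construction time.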
Triangulation Refinement Method
MAIN IDEAS:
1. Starting from an initial triangulation of the PSLG, iteratively build coarser and coarser
   triangulations by removing, at each level, an independent set of low-degree vertices
   (a constant fraction of the vertices) and re-triangulating.
2. Maintain the history DAG of the overlaps between consecutive levels; it becomes the
   search structure.
3. For point location, start from the coarsest subdivision and walk through the
   history DAG to ever more refined triangulations, until reaching the complete
   triangulation of the PSLG.
APPLICATIONS:
1. [O’Ro ’98, section 7.6]: Planar point location.
2. [O’Ro ’98, section 7.5]: Given a 3D convex polytope P:
   a) Is a given query point inside P?
   b) Find the extreme vertex of P in a given direction.
3. …
PSLG Triangulation
Worst-case Optimal: O(n log n) time and O(n) space.
Step 1: Trapezoidal decomposition [O(n log n) time, O(n) space].
Step 2: Add the supporting diagonal of each trapezoid [O(n) time & space].
Step 3: Remove the vertical visibility chords
        (now each region is x-monotone) [O(n) time & space].
Step 4: Triangulate each x-monotone region
        (including introduction of convex-hull edges) [O(n) time & space].
[Figures: the PSLG after each of steps 1-4.]
Triangulation Refinement Algorithm
Input: Triangulated PSLG G = (V, E), possibly with unbounded edges. |V| = n, |E| ≤ 3n-6.
Complexity: O(n) time & space.
Step 1: Inscribe the vertices of G in a (sufficiently large) bounding triangle T = (W1, W2, W3).
Step 2: New vertices = intersections of the unbounded edges of G with T. Re-triangulate.
        m = # vertices ≤ |V| + |T| + |E| ≤ n + 3 + 3n - 6 = O(n).  # edges = O(m) = O(n).
Step 3: Generate the triangulation refinement sequence
        (T1, T2, … , Th(n)), where h(n) = O(log n). [See the next slides.]
[Figure: the PSLG G inscribed in the bounding triangle T = (W1, W2, W3).]
Step 3: Triangulation Refinement Idea
[Figure: a sequence of coarser and coarser triangulations T1, T2, T3, T4 inside the bounding
triangle (W1, W2, W3), with the triangles numbered 1-20, and the corresponding hierarchical
(history) DAG linking each triangle to the overlapping triangles of the adjacent levels.]
Hierarchical (history) DAG:
• O(n) nodes
• node out-degrees = O(1)
• depth = O(log n)
Step 3: Triangulation Refinement Algorithm
Step 3 produces a sequence (T1, T2, … , Th(n)) of coarser-&-coarser triangulations,
with h(n) = O(log n). Let v(Tk) = # vertices of Tk.
∃ constant 0 < α < 1 s.t. v(Tk+1) ≤ α·v(Tk)  ⟹  h(n) = O(log_{1/α} n) = O(log n).
How to get from Tk to Tk+1:
• In Tk:  Σ { degree(v) | v ∈ Tk } = 2·(# edges of Tk) ≤ 6·v(Tk) - 12
  ⟹ average vertex degree < 6
  ⟹ Tk has at least v(Tk)/2 vertices of degree ≤ 2·(average) < 12
• D12 ← the vertices of degree ≤ 12 in Tk (excluding W1, W2, W3)
• Q ← ∅
  while D12 ≠ ∅ do
      v ← a vertex in D12
      add v to Q
      remove v and all its adjacent vertices from D12 (at most 13 vertices including v)
• |Q| ≥ |D12| / 13  ⟹  |Q| ≥ v(Tk) / 26
• Q is an independent set of vertices in Tk (i.e., no two are adjacent).
• Tk+1 ← Tk - Q (and re-triangulate it; see the next slide and the sketch below).
• v(Tk+1) ≤ α·v(Tk) where α = 25/26.
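A minimal sketch of the greedy selection of Q above, assuming the triangulation is given as an adjacency map (the names adj and excluded are illustrative):

```python
def low_degree_independent_set(adj, excluded=frozenset(), max_deg=12):
    """Greedy independent set of low-degree vertices, as in the Tk -> Tk+1 step.
    adj      : dict mapping each vertex to the set of its neighbors in Tk.
    excluded : vertices that must never be chosen (e.g. the corners W1, W2, W3)."""
    d12 = {v for v, nbrs in adj.items()
           if len(nbrs) <= max_deg and v not in excluded}
    q = set()
    while d12:
        v = d12.pop()            # pick any remaining low-degree vertex
        q.add(v)
        d12 -= adj[v]            # its (<= max_deg) neighbors can no longer be chosen
    return q                     # independent set with |q| >= |D12| / (max_deg + 1)
```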
Step 3: Triangulation Refinement Algorithm
v(Tk+1) ≤ α·v(Tk) with α = 25/26, and v(T1) = O(n)  ⟹  v(Tk) = O(n·α^k).
Total # vertices in T1, T2, … , Th(n) = O(n + nα + nα² + nα³ + … ) = O(n).
[Figure: removing a vertex v and re-triangulating the induced hole.]
A star-shaped polygon can be re-triangulated in linear time.
Triangulation Refinement Algorithm
Query Answering:   O(log n) time
1. Locate the query point q in a face (triangle) of Th(n).
2. for k ← h(n) - 1 downto 1 do
       using the history DAG, go from the face of Tk+1 containing q to the face of Tk containing q.
3. Report the located face of T1 that contains q.
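A minimal sketch of this descent through the hierarchy, assuming the history-DAG links and a point-in-triangle test are available (coarsest_triangles, children_of and contains are illustrative assumptions):

```python
def kirkpatrick_locate(coarsest_triangles, children_of, contains, q):
    """Walk from the coarsest triangulation Th(n) down to the finest one T1.
    coarsest_triangles : the O(1) triangles of Th(n) (inside the bounding triangle).
    children_of(t)     : the O(1) overlapping triangles of the next finer level.
    contains(t, q)     : point-in-triangle test.
    Returns the triangle of T1 containing q; O(log n) levels, O(1) work per level."""
    current = next(t for t in coarsest_triangles if contains(t, q))
    while children_of(current):              # descend one refinement level at a time
        current = next(t for t in children_of(current) if contains(t, q))
    return current
```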
Edelsbrunner et al. Algorithm
Planar Point Location for Monotone Subdivisions
• O(n) preprocessing time
• O(n) storage space
• O(log n) query time
[Figure: a monotone subdivision with regions R0, … , R11; region B lies below region A,
written B ≺ A.]
The “below” relation ≺ is a partial order on the regions; extend it to a linear order.
Edelsbrunner et al. Algorithm
[Figure: the monotone subdivision with regions R0, … , R11 and its monotone separator chains,
e.g. separator s8.]
R0 ≺ s1 ≺ R1 ≺ s2 ≺ ⋯ ≺ sn-1 ≺ Rn-1
Chain Tree: the separators sk are the internal nodes and the regions Rk are the leaves.
[Figure: the chain tree, with s_{n/2} at the root and s_{n/4}, s_{3n/4} as its children.]
Edelsbrunner et al. Algorithm
Naïve implementation:
• O(n²) time
• O(n²) space            [lots of overlap between chains]
• O(log² n) query time   [O(log n) levels, O(log n) time at each node on the search path]
Improvement (see chapter 11 of [Ede87]):
• O(n) time              [for a monotone subdivision]
• O(n) space
• O(log n) query time
Ideas used:
• Layered DAG            [Edelsbrunner, Guibas, Stolfi, SICOMP ’86, pp. 317-340]
• Fractional Cascading   [Chazelle, Guibas, Algorithmica ’86]
• Filtering Search       [Chazelle, SICOMP ’86, pp. 703-724]
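A minimal sketch of the naïve O(log² n) query on the chain tree: a binary search over the separators, where discriminating q against one separator (a monotone chain) is itself a binary search over the chain’s edges (the data layout below is an illustrative assumption):

```python
import bisect

def above_chain(q, chain_xs, chain_pts):
    """True if q lies above the x-monotone separator chain.
    chain_xs  : x-coordinates of the chain's vertices, sorted left to right.
    chain_pts : the chain's vertices as (x, y) pairs, in the same order."""
    i = bisect.bisect_right(chain_xs, q[0]) - 1
    i = max(0, min(i, len(chain_pts) - 2))      # clamp to a valid chain edge
    (x1, y1), (x2, y2) = chain_pts[i], chain_pts[i + 1]
    return (x2 - x1) * (q[1] - y1) - (y2 - y1) * (q[0] - x1) > 0

def chain_tree_locate(node, q):
    """node is either a region label (leaf) or a dict holding a separator chain and
    its 'below'/'above' subtrees. O(log n) separators, O(log n) time per separator."""
    while isinstance(node, dict):
        if above_chain(q, node["xs"], node["pts"]):
            node = node["above"]
        else:
            node = node["below"]
    return node          # the region Rk that contains q
```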
Line Segments Intersections:
Randomized Incremental Method
Input: A set S of n line segments in the plane.
Output: Report all pairs of intersecting segments in S
in expected O(K + n log n) time, where K = # of intersecting pairs in S.
Solution:
1. The Bentley-Ottmann deterministic algorithm takes O(K log n + n log n) time.
   So, that is not the right answer.
2. Construct the trapezoidal decomposition by a randomized incremental method.
3. No History DAG used for point location.
Instead, start the initial trapezoidal decomposition by inserting the 2n end-points of
the line segments. We can obtain this initial structure by sorting in O(n log n) time.
4. Suppose R is the random subset of r segments, out of the n input segments, that we have
   inserted so far. Let s be the latest segment added to R′ = R - {s} to get R.
[Figure: the newly inserted segment s.]
5. Insert s into R’: start from the trapezoid containing the leftmost end of s and
thread s left-to-right through the current trapezoidal decomposition.
- Which wall of the current trapezoid will you exit from?
- What is the next trapezoid you will enter?
- Easy if you enter through a vertical wall.
The entering face has at most 6 edges to be checked.
- What if you go through a segment intersection?
You may need to check more than O(1) sub-division edges.
- What can you do?
Answer: Use the planar sub-division shown in the right figure.
6. The insertion cost of segment s into R′ to get R = R′ ∪ {s} is
       cost(s, R) = O( total size of the red shaded zero-width faces ).
7. The expected cost of this insertion, over the random choice of s (looking backwards), is
       (1/r) Σ_{s∈R} cost(s, R)  ≤  (3/r) Σ_f size(f)  =  O( (n + K_R) / r ),
   where the 2nd sum is over all zero-width faces f of the current planar subdivision,
   and K_R is the number of intersecting pairs of segments in R.
   So, the expected insertion cost for R = R′ ∪ {s} is O( n/r + K_R/r ).
8. R itself is a random subset of r input segments. What is the expected value of K_R?
9. Let {a, b} be an intersecting pair of segments among the n input segments.
   The probability that {a, b} shows up in R is  C(r,2) / C(n,2) = r(r-1) / (n(n-1)).
10. So, the expected value of K_R is  K · r(r-1) / (n(n-1)).
    This depends only on the input and the iteration number r, and is independent of R.
11. So, the expected cost of the r-th iteration is  O( n/r + K(r-1) / (n(n-1)) ).
12. The total expected cost over the n iterations is
        Σ_{r=1..n} O( n/r + K(r-1) / (n(n-1)) )  =  O( n·Hn + K )  =  O( K + n log n ).
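For completeness, the two sums in step 12 evaluate as follows (a routine check, stated here in LaTeX; it is not spelled out on the slides):

```latex
\sum_{r=1}^{n}\frac{n}{r} = n H_n = O(n\log n),
\qquad
\sum_{r=1}^{n}\frac{K(r-1)}{n(n-1)}
   = \frac{K}{n(n-1)}\cdot\frac{n(n-1)}{2}
   = \frac{K}{2} = O(K).
```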
Exercises
1. Show that query point location can be done in linear time on a planar subdivision without
   preprocessing. That is, given the DCEL of a PSLG S of size n and a query point q, the face of
   S containing q can be found in O(n) time.
   [Note: we don’t have a point location data structure.]
2. Given a set P of n points in the plane, we define a subdivision of the plane into rectangular
   regions by the following rule. We assume that all the points are contained within a
   bounding rectangle. Imagine that the points are sorted in increasing order of y-coordinate.
   For each point in this order, shoot a bullet to the left, to the right and up until it hits an
   existing segment, and then add these three bullet-path segments to the subdivision.
   (See the figure below left for an example.)
   (a) Show that the resulting subdivision has size O(n) (including vertices, edges, and faces).
   (b) Describe an algorithm to add a new point to the subdivision and restore the proper
       subdivision structure. Note that the new point may have an arbitrary y-coordinate,
       but the subdivision must be updated as if the points were inserted in increasing order of
       y-coordinate. (See the figure above center and right.)
   (c) Prove that if the points are added in random order, then the expected number of
       structural changes to the subdivision with each insertion is O(1).
3. Shear Transform: Design an O(n log n) time algorithm for the following problem:
   Given a set P of n points in the plane, determine a real value ε > 0 such that the
   shear transform φ: (x, y) ↦ (x + εy, y) does not change the order (in x-direction) of the
   points in P with unequal x-coordinates.
4. Prove that the number of internal nodes of the search structure D of the
   randomized incremental algorithm TRAPEZOIDALMAP increases by ki - 1 in iteration
   i, where ki is the number of new trapezoids in T(Si) (and hence the number of new
   leaves of D).
5. Vertical ray shooting: Preprocess a given set S of n non-crossing line segments
   in the plane, so that for any given upward vertical ray r, we can quickly determine
   the first line segment in S (if any) that is hit by r. Derive bounds on the
   preprocessing time, storage space and query time.
END