CS 485 / 685 Computer Vision
Instructor: Mircea Nicolescu
Lecture 7

Second Derivative in 2D: Laplacian
• The Laplacian:
$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$
• The Laplacian can be implemented using the mask:

    0   1   0
    1  -4   1
    0   1   0

Variations of Laplacian
• Several discrete approximations exist (e.g., the 8-neighbor mask with −8 at the center). (Mask figures omitted.)

Laplacian – Example
(Figures omitted: an example image, the mask, and the filtered result.)

Properties of Laplacian
• It is an isotropic operator.
• It is cheaper to implement than the gradient (one mask only).
• It does not provide information about edge direction.
• It is more sensitive to noise (it differentiates twice).
• How do we estimate the edge strength? There are four cases of zero-crossings: {+,-}, {+,0,-}, {-,+}, {-,0,+}
• The slope of a zero-crossing {a, -b} is |a + b|.
• To mark an edge:
− compute the slope at the zero-crossing
− apply a threshold to the slope

Laplacian of Gaussian (LoG)
• The Marr-Hildreth edge detector:
− Uses the Laplacian-of-Gaussian (LoG).
− To reduce the effect of noise, the image is first smoothed with a low-pass filter.
− In the case of the LoG, the low-pass filter is chosen to be a Gaussian (σ determines the degree of smoothing; the mask size increases with σ).
• It can be shown that:
$$\nabla^2 [f(x,y) * G(x,y)] = [\nabla^2 G(x,y)] * f(x,y)$$
$$\nabla^2 G(x,y) = \frac{x^2 + y^2 - 2\sigma^2}{\sigma^4}\; e^{-\frac{x^2 + y^2}{2\sigma^2}}$$
(Figure omitted: the surface plot of ∇²G, the inverted LoG.)
• Masks: (figures of discrete LoG masks omitted.)
• Example: (figures omitted: I, I * ∇²G, and the zero-crossings of I * (∇²G).)

Separability
− Gaussian: a 2-D Gaussian can be separated into two 1-D Gaussians, so we can perform two convolutions with 1-D Gaussians instead:
$I(x,y) * g(x,y)$: k² multiplications per pixel
$I(x,y) * g(x) * g(y)$: 2k multiplications per pixel
For example (k = 7): g(x) = g(y) = [.011  .13  .6  1  .6  .13  .011]
− Laplacian-of-Gaussian:
$I * \nabla^2 g$ requires k² multiplications per pixel
$[I * g_{xx}(x)] * g(y) + [I * g_{yy}(y)] * g(x)$ requires only 4k multiplications per pixel
(Block diagrams omitted: Gaussian filtering passes the image through g(x) and then g(y) to obtain I * g; LoG filtering adds the output of the g_xx(x), g(y) branch to that of the g_yy(y), g(x) branch to obtain I * ∇²g.)

Laplacian of Gaussian (LoG)
• Marr-Hildreth (LoG) algorithm (a sketch follows below):
− Compute the LoG, using either:
  – one 2-D filter: ∇²g(x,y), or
  – four 1-D filters: g(x), g_xx(x), g(y), g_yy(y)
− Find zero-crossings from each row and column.
− Find the slope of the zero-crossings.
− Apply a threshold to the slope and mark the edges.
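The algorithm above can be sketched in a few lines of Python/NumPy. This is a minimal illustration, not part of the lecture: the helper names (`gaussian_1d`, `log_separable`, `mark_edges`), the 3σ mask radius, the SciPy dependency, and the wrap-around neighbor handling are all assumptions made for this sketch.

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_1d(sigma):
    """Sampled 1-D Gaussian g(x) and its second derivative g_xx(x)."""
    radius = int(3 * sigma)                  # mask size grows with sigma
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                             # normalize the smoothing kernel
    gxx = (x**2 - sigma**2) / sigma**4 * np.exp(-x**2 / (2 * sigma**2))
    return g, gxx

def log_separable(image, sigma=2.0):
    """I * LoG via four 1-D filters: [I*g_xx(x)]*g(y) + [I*g_yy(y)]*g(x)."""
    image = np.asarray(image, dtype=float)
    g, gxx = gaussian_1d(sigma)              # g_yy is the same 1-D function as g_xx
    term_x = convolve1d(convolve1d(image, gxx, axis=1), g, axis=0)
    term_y = convolve1d(convolve1d(image, gxx, axis=0), g, axis=1)
    return term_x + term_y

def mark_edges(log_img, thresh):
    """Mark zero-crossings (along rows and columns) whose slope exceeds thresh."""
    edges = np.zeros(log_img.shape, dtype=bool)
    for axis in (0, 1):
        a = log_img
        b = np.roll(log_img, -1, axis=axis)  # next neighbor (wraps at the border)
        sign_change = (a * b) < 0            # the {+,-} and {-,+} cases
        slope = np.abs(a - b)                # for values {a, -b} this is |a + b|
        edges |= sign_change & (slope > thresh)
    return edges
```

The {+,0,-} and {-,0,+} cases would need an extra test for exact zeros; the sketch keeps only the plain sign-change test for brevity.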
Gradient vs LoG
• Gradient vs. LoG – a comparison:
− The gradient works well when the image contains sharp intensity transitions (step edges).
− Zero-crossings of the LoG offer better localization, especially when the edges are not very sharp (ramp edges).
• Disadvantage of LoG edge detection:
− It does not handle corners well.
− Why? The derivative of the Gaussian is an oriented filter, while the Laplacian of the Gaussian is unoriented. (Filter plots omitted.)

Difference of Gaussians (DoG)
• The Difference-of-Gaussians approximates the LoG filter with a filter that is the difference of two differently sized Gaussians – a DoG filter.
− The image is first smoothed by convolution with a Gaussian kernel of scale σ₁:
$$g_1(x,y) = G_{\sigma_1}(x,y) * f(x,y)$$
− A second image is obtained by smoothing with a Gaussian kernel of scale σ₂:
$$g_2(x,y) = G_{\sigma_2}(x,y) * f(x,y)$$
− Their difference is:
$$g_1(x,y) - g_2(x,y) = (G_{\sigma_1} - G_{\sigma_2}) * f(x,y) = DoG * f(x,y)$$
− The DoG as an operator or convolution kernel is defined as:
$$DoG = G_{\sigma_1} - G_{\sigma_2} = e^{-\frac{x^2+y^2}{2\sigma_1^2}} - e^{-\frac{x^2+y^2}{2\sigma_2^2}}$$
(Figures omitted: the DoG approximation vs. the actual LoG; Gaussians at σ = 1 and σ = 2 and their difference.)

Edge Detection Using Directional Derivative
• The second directional derivative: the second derivative computed in the direction of the gradient.

Directional Derivative
• The partial derivatives of f(x,y) give the slope ∂f/∂x in the positive x direction and the slope ∂f/∂y in the positive y direction.
• We can generalize the partial derivatives to calculate the slope in any direction (i.e., the directional derivative): the directional derivative computes intensity changes in a specified direction u.
• From vector calculus, the directional derivative is a linear combination of the partial derivatives:
$$\nabla_{\mathbf{u}} f = \frac{u_x}{\|\mathbf{u}\|} \frac{\partial f}{\partial x} + \frac{u_y}{\|\mathbf{u}\|} \frac{\partial f}{\partial y}$$
• For a unit vector ($\|\mathbf{u}\| = 1$) at angle θ we have $u_x = \cos\theta$, $u_y = \sin\theta$, so:
$$\nabla_{\mathbf{u}} f = \frac{\partial f}{\partial x}\cos\theta + \frac{\partial f}{\partial y}\sin\theta$$

Higher Order Directional Derivatives
$$f'(x,y) = \frac{\partial f}{\partial x}\cos\theta + \frac{\partial f}{\partial y}\sin\theta$$
$$f''(x,y) = \frac{\partial^2 f}{\partial x^2}\cos^2\theta + 2\frac{\partial^2 f}{\partial x\,\partial y}\cos\theta\sin\theta + \frac{\partial^2 f}{\partial y^2}\sin^2\theta$$
$$f'''(x,y) = \frac{\partial^3 f}{\partial x^3}\cos^3\theta + 3\frac{\partial^3 f}{\partial x^2\,\partial y}\cos^2\theta\sin\theta + 3\frac{\partial^3 f}{\partial x\,\partial y^2}\cos\theta\sin^2\theta + \frac{\partial^3 f}{\partial y^3}\sin^3\theta$$

Edge Detection Using Directional Derivative
• What direction would you use for edge detection? The direction of the gradient:
$$\cos\theta = \frac{f_x}{\|\nabla f\|}, \qquad \sin\theta = \frac{f_y}{\|\nabla f\|}$$
• Substituting these into f'' gives the second directional derivative along the gradient direction:
$$f''(x,y) = \frac{f_x^2 f_{xx} + 2 f_x f_y f_{xy} + f_y^2 f_{yy}}{f_x^2 + f_y^2}$$
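A minimal sketch of this computation in Python/NumPy follows. The function name, the use of SciPy's Gaussian-derivative filtering to obtain the smoothed partials, and the default σ = 1 are assumptions made for this illustration, not from the lecture.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_derivative_along_gradient(image, sigma=1.0):
    """f'' in the gradient direction, built from Gaussian-smoothed partials."""
    image = np.asarray(image, dtype=float)
    # order=(dy, dx): derivative order along axis 0 (y) and axis 1 (x)
    fx  = gaussian_filter(image, sigma, order=(0, 1))
    fy  = gaussian_filter(image, sigma, order=(1, 0))
    fxx = gaussian_filter(image, sigma, order=(0, 2))
    fyy = gaussian_filter(image, sigma, order=(2, 0))
    fxy = gaussian_filter(image, sigma, order=(1, 1))
    # cos(theta) = fx/||grad f||, sin(theta) = fy/||grad f||, substituted into
    # f'' = fxx cos^2 + 2 fxy cos sin + fyy sin^2
    grad_sq = fx**2 + fy**2
    return (fx**2 * fxx + 2 * fx * fy * fxy + fy**2 * fyy) / np.maximum(grad_sq, 1e-12)
```

Edges would then be marked at the zero-crossings of this response, as in the LoG case; the division only normalizes the magnitude, so the zero-crossing locations are already determined by the numerator.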
Properties of Second Directional Derivative
(Slide content was graphical; figure omitted.)

Facet Model
• Assumes that an image is an array of samples of a continuous function z = f(x,y).
• Reconstructs f(x,y) from the sampled pixel values.
• Uses directional derivatives which are computed analytically (without using discrete approximations).
• For complex images, f(x,y) could contain extremely high powers of x and y.
• Idea: model f(x,y) as a piecewise function.
• Approximate each pixel value by fitting a bi-cubic polynomial in a small neighborhood around the pixel (a facet).
• Steps:
(1) Fit a bi-cubic polynomial to a small neighborhood of each pixel (this step provides smoothing too).
(2) Compute (analytically) the directional derivatives in the direction of the gradient.
(3) Find the points where the second derivative is equal to zero.

Anisotropic Filtering
• Symmetric Gaussian smoothing tends to blur out edges rather aggressively.
• An "oriented" smoothing operator (edge-preserving smoothing) works better:
(i) smooth aggressively perpendicular to the gradient;
(ii) smooth little along the gradient.
• Mathematically, this is formulated using a diffusion equation.
(Figure omitted: example result using anisotropic filtering.)

Effect of Scale
• Small σ detects fine features.
• Large σ detects large-scale edges.
(Figure omitted: the original image and its edges at increasing σ.)

Multi-Scale Processing
• A formal theory for handling image structures at multiple scales.
• Determine which structures (e.g., edges) are most significant by considering the range of scales over which they occur.
• Interesting scales are those at which important structures are present; e.g., in an example sequence smoothed at σ = 1, 2, 4, 8, 16 (figure omitted), people can be detected at scales 1-4.

Scale Space
• Detect and plot the zero-crossings of a 1-D function over a continuum of scales σ.
• Instead of treating zero-crossings at a single scale as single points, we can now treat them at multiple scales as contours in the (x, σ) plane. (Figure omitted: a Gaussian-filtered signal and its zero-crossing contours.)
• Properties of scale space (assuming Gaussian smoothing):
− Zero-crossings may shift with increasing scale (σ).
− Two zero-crossings may merge with increasing scale.
− A contour may not split in two with increasing scale.

Multi-Scale Processing
(Example figures omitted.)

Edge Detection is Just the Beginning…
(Figures omitted: an image, its human segmentation, and its gradient magnitude.)
• Berkeley segmentation database: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/

Math Review
• Vectors
• Matrices
• SVD
• Linear systems
• Geometric transformations

n-Dimensional Vector
• An n-dimensional vector v is denoted as the column vector:
$$v = (x_1, x_2, \ldots, x_n)^T$$
• Its transpose $v^T$ is denoted as the row vector $(x_1, x_2, \ldots, x_n)$.

Vector Normalization
• Vector normalization produces a unit-length vector:
$$\hat{v} = \frac{v}{\|v\|}, \qquad \|v\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$$
• Example: v = (3, 4) has ‖v‖ = 5, so the normalized vector is (3/5, 4/5).

Inner (or Dot) Product
• Given $v^T = (x_1, x_2, \ldots, x_n)$ and $w^T = (y_1, y_2, \ldots, y_n)$, their dot product is defined as the scalar:
$$v \cdot w = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n \quad \text{or equivalently} \quad v \cdot w = v^T w$$

Defining Magnitude Using Dot Product
• Magnitude definition: $\|v\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$
• Dot product definition: $v \cdot v = \sum_{i=1}^{n} x_i^2$
• Therefore: $\|v\|^2 = v \cdot v$, or $\|v\| = \sqrt{v \cdot v} = \sqrt{v^T v}$

Geometric Definition of Dot Product
$$u \cdot v = \|u\| \|v\| \cos\theta$$
• θ corresponds to the smaller angle between u and v.
• The sign of u·v depends on cos(θ): positive for acute angles, zero for orthogonal vectors, negative for obtuse angles.

Vector (Cross) Product
• The cross product $u = v \times w$ is a VECTOR!
• Magnitude: $\|u\| = \|v\| \|w\| \sin\theta$
• Orientation: u is perpendicular to both operands:
$$u \cdot v = (v \times w) \cdot v = 0, \qquad u \cdot w = (v \times w) \cdot w = 0$$

Vector Product Computation
• For $v = (x_1, x_2, x_3)$ and $w = (y_1, y_2, y_3)$:
$$u = v \times w = \begin{vmatrix} i & j & k \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{vmatrix} = (x_2 y_3 - x_3 y_2)\,i + (x_3 y_1 - x_1 y_3)\,j + (x_1 y_2 - x_2 y_1)\,k$$
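To make the review concrete, here is a small numeric check of the dot- and cross-product identities above (NumPy assumed; the sample vectors are arbitrary choices for this sketch):

```python
import numpy as np

v = np.array([3.0, 4.0, 0.0])
w = np.array([1.0, 2.0, 2.0])

# Magnitude via the dot product: ||v||^2 = v . v
norm_v = np.sqrt(v @ v)            # 5.0, same as np.linalg.norm(v)
v_unit = v / norm_v                # normalization -> unit-length vector

# Geometric definition: v . w = ||v|| ||w|| cos(theta)
cos_theta = (v @ w) / (norm_v * np.linalg.norm(w))

# Cross product: u = v x w is a vector orthogonal to both operands
u = np.cross(v, w)                 # (8, -6, 2) for these inputs
assert np.isclose(u @ v, 0.0) and np.isclose(u @ w, 0.0)

# Its magnitude is ||v|| ||w|| sin(theta)
sin_theta = np.sqrt(1.0 - cos_theta**2)
assert np.isclose(np.linalg.norm(u),
                  norm_v * np.linalg.norm(w) * sin_theta)
```

Running this silently passes both assertions, confirming that v × w is perpendicular to v and w and that its magnitude matches ‖v‖‖w‖ sin θ.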