Overview of 3D Scanners

Acknowledgement: some content and figures by Brian Curless
3D Data Types
• Point Data
• Volumetric Data
• Surface Data
3D Data Types: Point Data
• “Point clouds”
• Advantage: simplest data type
• Disadvantage: no information on
adjacency / connectivity
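A minimal sketch in Python (assuming NumPy and SciPy are available) of how a point cloud is typically stored, and how neighborhood information must be recomputed on demand because the representation itself carries no adjacency; names are illustrative:

    import numpy as np
    from scipy.spatial import cKDTree

    # A point cloud is just an N x 3 array of (x, y, z) samples.
    points = np.random.rand(1000, 3)

    # No connectivity is stored, so neighbors must be found on the fly,
    # e.g. with a k-d tree (here: the 8 nearest neighbors of point 0).
    tree = cKDTree(points)
    dists, idx = tree.query(points[0], k=8)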
3D Data Types: Volumetric Data
• Regularly-spaced grid in (x,y,z): “voxels”
• For each grid cell, store
– Occupancy (binary: occupied / empty)
– Density
– Other properties
• Popular in medical imaging
– CAT scans
– MRI
3D Data Types: Volumetric Data
• Advantages:
– Can “see inside” an object
– Uniform sampling: simpler algorithms
• Disadvantages:
– Lots of data
– Wastes space if only storing a surface
– Most “vision” sensors / algorithms return
point or surface data
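A small sketch (assuming NumPy) of a binary occupancy grid for a solid sphere; it illustrates both the uniform sampling and how much memory goes to cells that merely record "empty":

    import numpy as np

    # 128^3 regularly spaced voxel grid over [-1, 1]^3.
    n = 128
    coords = np.linspace(-1.0, 1.0, n)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")

    # Binary occupancy: a voxel is "occupied" if it lies inside
    # a sphere of radius 0.5 centered at the origin.
    occupancy = (x**2 + y**2 + z**2) <= 0.5**2

    print(occupancy.sum(), "occupied of", occupancy.size, "voxels")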
3D Data Types: Surface Data
• Polyhedral
– Piecewise planar
– Polygons connected together
– Most popular: “triangle meshes”
• Smooth
– Higher-order (quadratic, cubic, etc.) curves
– Bézier patches, splines, NURBS, subdivision surfaces,
etc.
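A minimal sketch of a triangle mesh as an indexed face set: shared vertex positions plus faces that index into them. The data are illustrative (a single tetrahedron):

    import numpy as np

    # Vertices: V x 3 positions; faces: F x 3 indices into the vertex array.
    vertices = np.array([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0]])
    faces = np.array([[0, 1, 2],
                      [0, 1, 3],
                      [0, 2, 3],
                      [1, 2, 3]])   # a tetrahedron: 4 triangles

    # Connectivity is explicit: each face records which vertices it joins.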
3D Data Types: Surface Data
• Advantages:
– Usually corresponds to what we see
– Usually returned by vision sensors / algorithms
• Disadvantages:
– How to find “surface” for translucent objects?
– Parameterization often non-uniform
– Non-topology-preserving algorithms difficult
3D Data Types: Surface Data
• Implicit surfaces (cf. parametric)
– Zero set of a 3D function
– Usually regularly sampled (voxel grid)
• Advantage: easy to write algorithms that change
topology
• Disadvantage: wasted space, time
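A sketch of an implicit surface as the zero set of a function sampled on a voxel grid, here the signed distance to a sphere (assuming NumPy); the surface itself could then be extracted with a marching-cubes routine. It also shows why topology changes are easy: a union is just a pointwise minimum.

    import numpy as np

    # Sample f(x, y, z) = sqrt(x^2 + y^2 + z^2) - r on a regular grid;
    # the surface is the set of points where f == 0.
    n, r = 64, 0.5
    coords = np.linspace(-1.0, 1.0, n)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    f = np.sqrt(x**2 + y**2 + z**2) - r

    # Union with a second sphere: the topology changes via a pointwise minimum.
    f2 = np.sqrt((x - 0.6)**2 + y**2 + z**2) - r
    union = np.minimum(f, f2)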
2½-D Data
• Image: stores an intensity / color along
each of a set of regularly-spaced rays in space
• Range image: stores a depth along
each of a set of regularly-spaced rays in space
• Not a complete 3D description: does not
store objects occluded (from some viewpoint)
• View-dependent scene description
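A sketch (assuming a pinhole camera with known focal lengths and principal point; names are illustrative) of how a range image back-projects to a view-dependent point cloud: each pixel with a valid depth yields one 3D point along its ray.

    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        # Back-project an H x W depth map (depth along the optical axis)
        # into an (H*W) x 3 point cloud in the camera frame.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) / fx * depth
        y = (v - cy) / fy * depth
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)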
2½-D Data
• This is what most sensors / algorithms
really return
• Advantages
– Uniform parameterization
– Adjacency / connectivity information
• Disadvantages
– Does not represent entire object
– View dependent
2½-D Data
• Range images
• Range surfaces
• Depth images
• Depth maps
• Height fields
• 2½-D images
• Surface profiles
• xyz maps
• …
Related Fields
• Computer Vision
– Passive range sensing
– Rarely construct complete, accurate models
– Application: recognition
• Metrology
– Main goal: absolute accuracy
– High precision, provable errors more important than
scanning speed, complete coverage
– Applications: industrial inspection, quality control,
as-built models
Related Fields
• Computer Graphics
– Often want complete model
– Low noise, geometrically consistent model more
important than absolute accuracy
– Application: animated CG characters
Terminology
• Range acquisition, shape acquisition,
rangefinding, range scanning, 3D scanning
• Alignment, registration
• Surface reconstruction, 3D scan merging, scan
integration, surface extraction
• 3D model acquisition
Range Acquisition Taxonomy
Range acquisition
• Contact
  – Mechanical (CMM, jointed arm)
  – Inertial (gyroscope, accelerometer)
  – Ultrasonic trackers
  – Magnetic trackers
• Transmissive
  – Industrial CT
  – Ultrasound
  – MRI
• Reflective
  – Non-optical (radar, sonar)
  – Optical
Range Acquisition Taxonomy
Optical methods
• Passive (“shape from X”)
  – Stereo
  – Motion
  – Shading
  – Texture
  – Focus
  – Defocus
• Active
  – Active variants of passive methods
    • Stereo with projected texture
    • Active depth from defocus
    • Photometric stereo
  – Time of flight
  – Triangulation
Touch Probes
• Jointed arms with
angular encoders
• Return position,
orientation of tip
Faro Arm – Faro Technologies, Inc.
Optical Range Acquisition Methods
• Advantages:
– Non-contact
– Safe
– Usually inexpensive
– Usually fast
• Disadvantages:
– Sensitive to transparency
– Confused by specularity and interreflection
– Texture (helps some methods, hurts others)
Stereo
• Find feature in one image, search along epipolar
line in other image for correspondence
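A minimal sketch of the core computation for rectified stereo (epipolar lines are image rows): match a small window along the corresponding row of the other image, then convert disparity to depth via Z = f·B/d. Window size and search range are illustrative choices, and the function assumes the pixel is far enough from the image border.

    import numpy as np

    def match_along_row(left, right, row, col, half=5, max_disp=64):
        # SSD block matching along the epipolar line (same row) of the
        # rectified right image; returns the best disparity in pixels.
        patch = left[row-half:row+half+1, col-half:col+half+1].astype(float)
        best_d, best_cost = 0, np.inf
        for d in range(max_disp):
            c = col - d
            if c - half < 0:
                break
            cand = right[row-half:row+half+1, c-half:c+half+1].astype(float)
            cost = np.sum((patch - cand) ** 2)
            if cost < best_cost:
                best_cost, best_d = cost, d
        return best_d

    # Depth from disparity: Z = f * B / d
    # (f: focal length in pixels, B: baseline, d: disparity in pixels).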
Stereo
• Advantages:
– Passive
– Cheap hardware (2 cameras)
– Easy to accommodate motion
– Intuitive analogue to human vision
• Disadvantages:
– Only acquire good data at “features”
– Sparse, relatively noisy data (correspondence is hard)
– Bad around silhouettes
– Confused by non-diffuse surfaces
• Variant: multibaseline stereo to reduce ambiguity
Why More Than 2 Views?
• Baseline
– Too short: low accuracy
– Too long: matching becomes hard
Why More Than 2 Views?
• Ambiguity with 2 views
(Figure: Camera 1, Camera 2, Camera 3)
Multibaseline Stereo
[Okutomi & Kanade]
Shape from Motion
• “Limiting case” of multibaseline stereo
• Track a feature in a video sequence
• For n frames and f features, have
2nf knowns, 6n+3f unknowns
Shape from Motion
• Advantages:
– Feature tracking is easier than finding correspondences between widely separated views
– Mathematically more stable (large overall baseline)
• Disadvantages:
– Does not accommodate object motion
– Still problems in areas of low texture, in non-diffuse
regions, and around silhouettes
Shape from Shading
• Given: image of surface with known, constant
reflectance under known point light
• Estimate normals, integrate to find surface
• Problem: ambiguity
Shape from Shading
• Advantages:
– Single image
– No correspondences
– Analogue in human vision
• Disadvantages:
– Mathematically unstable
– Can’t have texture
• “Photometric stereo” (active method) more
practical than passive version
Shape from Texture
• Mathematically similar to shape from shading, but uses
stretch and shrink of a (regular) texture
Shape from Texture
• Analogue to human vision
• Same disadvantages as shape from shading
Shape from Focus and Defocus
• Shape from focus: at which focus setting is a
given image region sharpest?
• Shape from defocus: how out-of-focus is each
image region?
• Passive versions rarely used
• Active depth from defocus can be
made practical
Active Optical Methods
• Advantages:
– Usually can get dense data
– Usually much more robust and accurate than passive
techniques
• Disadvantages:
– Introduces light into scene (distracting, etc.)
– Not motivated by human vision
Active Variants of Passive Techniques
• Regular stereo with projected texture
– Provides features for correspondence
• Active depth from defocus
– Known pattern helps to estimate defocus
• Photometric stereo
– Shape from shading with multiple known lights
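A sketch of classic Lambertian photometric stereo with three or more known, distant light directions: stack the per-pixel intensities, solve the linear system I = L·(albedo·n) in the least-squares sense, then separate albedo (magnitude) from the unit normal (direction). Function and variable names are illustrative.

    import numpy as np

    def photometric_stereo(images, lights):
        # images: K x H x W intensities; lights: K x 3 unit light directions.
        # Returns albedo (H x W) and unit normals (H x W x 3), assuming a
        # Lambertian surface and no shadows.
        k, h, w = images.shape
        I = images.reshape(k, -1)                          # K x (H*W)
        G, *_ = np.linalg.lstsq(lights, I, rcond=None)     # 3 x (H*W) = albedo * n
        albedo = np.linalg.norm(G, axis=0)
        normals = (G / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
        return albedo.reshape(h, w), normals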
Pulsed Time of Flight
• Basic idea: send out pulse of light (usually laser),
time how long it takes to return
d = ½ · c · Δt
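The formula above in a couple of lines, plus the timing-resolution point made on the next slide: resolving ~5 mm in range requires timing the pulse to roughly 30 picoseconds.

    C = 299_792_458.0            # speed of light, m/s

    def tof_distance(round_trip_s):
        # Light travels out and back, so d = (1/2) * c * t.
        return 0.5 * C * round_trip_s

    # Timing resolution needed for 5 mm range resolution:
    dt = 2 * 0.005 / C
    print(dt * 1e12, "ps")       # ~33 picoseconds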
Pulsed Time of Flight
• Advantages:
– Large working volume (up to 100 m.)
• Disadvantages:
– Not-so-great accuracy (at best ~5 mm.)
• Requires getting timing to ~30 picoseconds
• Does not scale with working volume
• Often used for scanning buildings, rooms,
archeological sites, etc.
AM Modulation Time of Flight
• Modulate a laser at frequencym , it returns with
a phase shift 
1  c    2n 
d   

2  νm  2

• Note the ambiguity in the measured phase!
 Range ambiguity of 1/2mn
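The phase-to-distance relation written out; the integer n (number of whole modulation periods) is unknown, which is exactly the range ambiguity of ½ · c / νm noted above. Values are illustrative.

    import math

    C = 299_792_458.0   # speed of light, m/s

    def am_tof_distance(phase_rad, mod_freq_hz, n=0):
        # d = (1/2) * (c / nu_m) * (delta_phi / (2*pi) + n);
        # n is the unknown number of whole periods (range ambiguity).
        return 0.5 * (C / mod_freq_hz) * (phase_rad / (2 * math.pi) + n)

    # e.g. 10 MHz modulation -> ambiguity interval of (1/2) * c / nu_m ~= 15 m
    print(am_tof_distance(math.pi, 10e6))   # ~7.5 m (or ~22.5 m, ~37.5 m, ...)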
AM Modulation Time of Flight
• Accuracy / working volume tradeoff
(e.g., noise ≈ 1/500 of the working volume)
• In practice, often used for room-sized
environments (cheaper, more accurate than
pulsed time of flight)
Triangulation
Triangulation: Moving the
Camera and Illumination
• Moving independently leads to problems with
focus, resolution
• Most scanners mount camera and light source
rigidly, move them as a unit
Triangulation: Extending to 3D
• Possibility #1: add another mirror (flying spot)
• Possibility #2: project a stripe, not a dot
(Figure: laser stripe projected onto the object and viewed by the camera)
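A sketch of the basic triangulation geometry for a single laser dot: with a known baseline between laser and camera and the two ray angles (both measured from the baseline toward the scene), the law of sines in the laser–camera–point triangle gives the range, from which depth follows. The angle convention and names are assumptions for illustration.

    import math

    def triangulate_dot(baseline_m, laser_angle_rad, camera_angle_rad):
        # Third angle of the triangle formed by laser, camera, and lit point.
        gamma = math.pi - laser_angle_rad - camera_angle_rad
        # Law of sines gives the range from the camera to the point...
        range_from_camera = baseline_m * math.sin(laser_angle_rad) / math.sin(gamma)
        # ...and the point's perpendicular distance from the baseline is the depth.
        return range_from_camera * math.sin(camera_angle_rad)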
Triangulation Scanner Issues
• Accuracy proportional to working volume
(typical is ~1000:1)
• Scales down to small working volume
(e.g., 5 cm working volume, 50 µm accuracy)
• Does not scale up (baseline too large…)
• Two-line-of-sight problem (shadowing from either
camera or laser)
• Triangulation angle: non-uniform resolution if too small,
shadowing if too big (useful range: 15–30°)
Triangulation Scanner Issues
• Material properties (dark, specular)
• Subsurface scattering
• Laser speckle
• Edge curl
• Texture embossing
Multi-Stripe Triangulation
• To go faster, project multiple stripes
• But which stripe is which?
• Answer #1: assume surface continuity
Multi-Stripe Triangulation
• To go faster, project multiple stripes
• But which stripe is which?
• Answer #2: colored stripes (or dots)
Multi-Stripe Triangulation
• To go faster, project multiple stripes
• But which stripe is which?
• Answer #3: time-coded stripes
Time-Coded Light Patterns
• Assign each stripe a unique illumination code
over time [Posdamer 82]
(Figure: stripe code patterns varying over time and across space)
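A sketch, in the spirit of [Posdamer 82], of generating binary Gray-code stripe patterns: each projector column receives a unique code across the sequence of projected images, so a camera pixel can be decoded back to the stripe it saw. Sizes are illustrative.

    import numpy as np

    def gray_code_patterns(n_cols, n_bits):
        # Returns n_bits stripe patterns (each of length n_cols, values 0/1)
        # that give every projector column a unique Gray code over time.
        cols = np.arange(n_cols)
        gray = cols ^ (cols >> 1)                    # binary -> Gray code
        patterns = [(gray >> b) & 1 for b in reversed(range(n_bits))]
        return np.stack(patterns)                    # n_bits x n_cols

    # e.g. 1024 projector columns need log2(1024) = 10 time-coded patterns:
    pats = gray_code_patterns(1024, 10)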