
Final Exam Review
CS485/685 Computer Vision
Prof. Bebis
Final Exam
• Final exam will be comprehensive.
– Midterm Exam material
– SIFT
– Object recognition
– Face recognition using eigenfaces
– Camera parameters
– Camera calibration
– Stereo
SIFT feature computation
• Steps
– Scale space extrema detection (how is it different from
Harris-Laplace? different parameters)
– Keypoint localization (need to know main ideas, no
equations; two thresholds, which ones?)
– Orientation assignment (how are the histograms built?
multiple peaks?)
– Keypoint descriptor (how are the histograms built? partial
voting, main parameters, invariance to illumination changes)
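A minimal sketch of the detection side of this pipeline, assuming OpenCV's SIFT implementation (the parameter names and default values below are OpenCV's, not necessarily the exact numbers used in the lecture); contrastThreshold and edgeThreshold correspond to the two keypoint-localization thresholds asked about above, and the file name is a placeholder.

import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder input image

sift = cv2.SIFT_create(nOctaveLayers=3,        # scales per octave in the DoG pyramid
                       contrastThreshold=0.04, # threshold 1: reject low-contrast extrema
                       edgeThreshold=10,       # threshold 2: reject edge-like points (curvature ratio)
                       sigma=1.6)              # base smoothing of the first octave

# keypoints carry location, scale, and dominant orientation;
# descriptors are the 4x4x8 = 128-D arrays of orientation histograms
keypoints, descriptors = sift.detectAndCompute(img, None)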
SIFT features
• Properties
– Scale and rotation invariant
– Highly distinctive
– Partially invariant to 3D viewpoint and illumination changes
– Fast and efficient computation
• Main parameters?
• Matching
– How do we match SIFT features?
– How do we evaluate the performance of a feature matcher?
• Applications
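One standard answer to the matching question above is Lowe's ratio test: accept a match only if the nearest descriptor is clearly closer than the second nearest. A minimal sketch (the 0.8 ratio is the value suggested in Lowe's paper; file names are placeholders):

import cv2

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(cv2.imread("img1.jpg", cv2.IMREAD_GRAYSCALE), None)
kp2, des2 = sift.detectAndCompute(cv2.imread("img2.jpg", cv2.IMREAD_GRAYSCALE), None)

# brute-force matcher with Euclidean distance; take the 2 nearest neighbors of each descriptor
bf = cv2.BFMatcher(cv2.NORM_L2)
pairs = bf.knnMatch(des1, des2, k=2)

# ratio test: keep a match only if it is clearly better than the runner-up
good = [m for m, n in pairs if m.distance < 0.8 * n.distance]

Matcher performance is usually summarized with precision/recall (or ROC) curves obtained by sweeping the ratio threshold against ground-truth correspondences.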
SIFT variations
• PCA SIFT
• SURF
• GLOH
• Need to know key ideas and steps (no need to
remember exact parameter values)
• Similarities/Differences with SIFT
• Strengths/Weaknesses
Object Recognition
• Model-based vs category-specific recognition
– Preprocessing & Recognition
• Challenges?
– Photometric effects, scene clutter, changes in shape (e.g.,
non-rigid objects), viewpoint changes
• Requirements?
– Invariance, robustness
• Performance Criteria?
– Efficiency (time + memory), accuracy
Object Recognition (cont’d)
• Representation schemes – advantages/disadvantages
– Object centered (3D/3D or 3D/2D matching)
– Viewer centered (2D/2D matching)
• Matching schemes – advantages/disadvantages
– Geometry-based
– Appearance-based
Object Recognition (cont’d)
• Main steps in matching:
– Hypothesis generation
– Hypothesis verification
• Efficient hypothesis generation
– Which scene features to choose?
– How to organize and search the model database?
Object Recognition Methods
• Alignment
• Pose Clustering
• Geometric Hashing
• Need to know the main ideas and steps of each method
Object Recognition using SIFT
• Main ideas and steps
– Perform nearest neighbor search
– Find clusters of features (pose clustering)
– Perform verification
• Practical issues
– Approximate nearest neighbors
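For the approximate nearest-neighbor issue noted above, a common shortcut is a randomized KD-tree index; a sketch using OpenCV's FLANN wrapper (index/search parameters are illustrative, and des_model / des_scene are assumed to be SIFT descriptor arrays as in the earlier sketch):

import cv2

FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)   # randomized KD-trees
search_params = dict(checks=50)                              # leaves visited (speed/accuracy trade-off)

flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des_scene, des_model, k=2)          # approximate 2-NN search
good = [m for m, n in matches if m.distance < 0.8 * n.distance]
# good matches are then grouped by consistent pose (pose clustering) and verified geometrically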
Bag of Features
• Origins of bag of features method
• Computing Bag of Features
– Feature extraction
– Learn “visual vocabulary” (e.g., K-Means clustering)
– Quantize features using the “visual vocabulary”
– Represent images by frequencies of “visual words” (bags of features)
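A compact sketch of the vocabulary and histogram steps above, assuming SIFT descriptors have already been extracted per image (descriptors_per_image is a placeholder) and using scikit-learn's KMeans; the vocabulary size is illustrative.

import numpy as np
from sklearn.cluster import KMeans

# descriptors_per_image: list of (n_i x 128) SIFT descriptor arrays, one per training image
all_desc = np.vstack(descriptors_per_image)

# learn the visual vocabulary: each cluster center is a "visual word"
K = 500
vocab = KMeans(n_clusters=K, n_init=4).fit(all_desc)

def bag_of_words(desc):
    words = vocab.predict(desc)                        # quantize: nearest visual word per descriptor
    hist = np.bincount(words, minlength=K).astype(float)
    return hist / hist.sum()                           # word-frequency histogram for the image

image_histograms = [bag_of_words(d) for d in descriptors_per_image]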
Bag of Features (cont’d)
• Object categorization using bags of features.
– Represent objects using Bag of Features
– Classification (NN, kNN, SVM)
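Categorization then treats each histogram as a feature vector; e.g., a kNN classifier over the bag-of-features histograms (a sketch; train_histograms, train_labels, and test_histograms are assumed to come from the previous step).

from sklearn.neighbors import KNeighborsClassifier

clf = KNeighborsClassifier(n_neighbors=5)        # kNN on bag-of-words histograms
clf.fit(train_histograms, train_labels)
predicted = clf.predict(test_histograms)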
PCA
• Need to know steps and equations.
• What criterion does PCA minimize?
• How is the “best” low-dimensional space determined
using PCA?
• What is the geometric interpretation of PCA?
• Practical issues (e.g., choosing K, computing error,
standardization)
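A minimal numpy sketch of the PCA steps (mean-centering, eigendecomposition of the covariance matrix, choosing K from the eigenvalue spectrum, projection, reconstruction error); the data and the 95% variance threshold are placeholders.

import numpy as np

X = np.random.rand(100, 50)                 # 100 samples, 50 dimensions (placeholder data)

mean = X.mean(axis=0)
Xc = X - mean                               # 1) subtract the mean
C = Xc.T @ Xc / (len(X) - 1)                # 2) sample covariance matrix
evals, evecs = np.linalg.eigh(C)            # 3) eigenvectors = principal directions
order = np.argsort(evals)[::-1]             # sort by decreasing eigenvalue (variance)
evals, evecs = evals[order], evecs[:, order]

# 4) choose K so that, e.g., 95% of the total variance is retained
K = int(np.searchsorted(np.cumsum(evals) / evals.sum(), 0.95)) + 1

# 5) project onto the K-dim subspace and measure the reconstruction error;
#    PCA chooses the subspace that minimizes this (squared) error
Y = Xc @ evecs[:, :K]
X_rec = Y @ evecs[:, :K].T + mean
err = np.linalg.norm(X - X_rec)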
Using PCA for Face Recognition
• Represent faces using PCA – need to know steps and
practical issues (e.g., AA^T vs. A^T A)
• Face recognition using PCA (i.e., eigenfaces)
– DIFS
• Face detection using PCA
– DFFS
• Limitations
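The AA^T vs. A^T A issue above is the usual dimensionality trick: with n training faces of d pixels each (n much smaller than d), compute the eigenvectors of the small n x n matrix A^T A and map them back to eigenfaces. A sketch, assuming faces is a d x n matrix whose columns are the training faces:

import numpy as np

A = faces - faces.mean(axis=1, keepdims=True)    # mean-centered face columns

# eigenvectors of the small n x n matrix A^T A ...
evals, V = np.linalg.eigh(A.T @ A)
order = np.argsort(evals)[::-1]
V = V[:, order]

# ... give the eigenvectors of the huge d x d matrix AA^T via u_i = A v_i (then normalize)
U = A @ V
U /= np.linalg.norm(U, axis=0)                   # columns of U are the eigenfaces

# DIFS: distance between two faces' coefficient vectors inside face space (recognition)
# DFFS: distance between a face and its reconstruction from face space (detection)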
Camera Parameters
• Reference frames – what are they?
– World
– Camera
– Image plane
– Pixel plane
• Perspective projection
– Should know how to derive equations
– Matrix notation
– Properties of perspective projection
– Vanishing points, vanishing lines
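A small numeric sketch of the perspective projection equations and their homogeneous matrix form (standard pinhole notation with focal length f; the numbers are illustrative, and symbols may differ slightly from the lecture's):

import numpy as np

f = 0.05                                   # focal length
P = np.array([3.0, 1.0, 10.0])             # 3D point (X, Y, Z) in the camera frame

# nonlinear form: x = f*X/Z, y = f*Y/Z
x, y = f * P[0] / P[2], f * P[1] / P[2]

# same projection with homogeneous coordinates and a 3x4 matrix
M = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]], dtype=float)
p_h = M @ np.append(P, 1.0)
x2, y2 = p_h[0] / p_h[2], p_h[1] / p_h[2]  # divide by the third coordinate: same (x, y)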
Camera Parameters
• Orthographic projection
– How is it related to perspective projection?
– Study equations
– Matrix notation
– Properties
• Weak perspective projection
– How is it related to perspective projection?
– Study equations
– Matrix notation
– Properties
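The relation to perspective can be checked numerically: weak perspective divides every point by one average depth instead of its own Z, which is a good approximation when the depth range of the object is small compared to its distance from the camera. A sketch with illustrative numbers:

import numpy as np

f = 0.05
pts = np.array([[1.0,  0.5, 20.0],
                [0.8, -0.3, 21.0]])         # shallow scene, far from the camera

persp = f * pts[:, :2] / pts[:, 2:3]        # perspective: divide by each point's own Z
z_bar = pts[:, 2].mean()
weak = f * pts[:, :2] / z_bar               # weak perspective: one common scale f / Z_bar
ortho = pts[:, :2].copy()                   # orthographic: drop Z entirely

print(persp, weak)                          # nearly identical when depth variation << distance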
Camera Parameters (cont’d)
• Extrinsic camera parameters
– What are they and what is their meaning?
– Study equations
• Intrinsic camera parameters
– What are they and what is their meaning?
– Study equations
• Projection matrix
– What does it represent?
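A sketch of how the pieces fit together: the projection matrix is the product of the intrinsic matrix (pixel focal lengths and principal point) and the extrinsic world-to-camera transform (rotation R, translation t); all numbers below are illustrative.

import numpy as np

# intrinsic parameters: focal length in pixels (fx, fy), principal point (ox, oy)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# extrinsic parameters: world-to-camera rotation and translation
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])

M = K @ np.hstack([R, t])                  # 3x4 projection matrix

Pw = np.array([0.2, -0.1, 1.0, 1.0])       # world point, homogeneous
p = M @ Pw
u, v = p[0] / p[2], p[1] / p[2]            # pixel coordinates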
Camera Calibration
• What is the goal of camera calibration and how is it
performed?
• Camera calibration using the projection matrix (study
equations for step 1 only; you should remember how
this method works in general)
• Direct parameter calibration (do not memorize the
equations, but remember how they work); how is the
orthogonality constraint on the rotation matrix
enforced?
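For step 1 of projection-matrix calibration, each known 3D-to-2D pair gives two homogeneous linear equations in the 12 entries of M; stacking them and taking the SVD null vector yields M up to scale. The orthogonality question is typically answered by projecting the estimated rotation onto the nearest true rotation, again via SVD. A sketch (input point arrays are assumed):

import numpy as np

def estimate_projection_matrix(Xw, uv):
    """Xw: n x 3 world points, uv: n x 2 pixel points, n >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(Xw, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)            # least-squares solution, defined up to scale

def nearest_rotation(R_est):
    """Enforce orthogonality on an estimated rotation matrix."""
    U, _, Vt = np.linalg.svd(R_est)
    return U @ Vt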
Stereo
• What is the goal of stereo vision?
• Triangulation principle.
• Familiarity with terminology (e.g., baseline, epipolar
plane, epipolar lines, epipoles, disparity)
• Two main problems of stereo (i.e., correspondence +
reconstruction)
• Recover depth from disparity – study proof.
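The relation behind that proof: for a rectified pair with baseline B and focal length f, similar triangles give Z = f*B/d, so depth is inversely proportional to disparity. A one-line check with illustrative numbers:

def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    # Z = f * B / d  (larger disparity means a closer point)
    return f_pixels * baseline_m / disparity_pixels

print(depth_from_disparity(800.0, 0.12, 16.0))   # 6.0 meters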
Correspondence Problem
• What is the correspondence problem and why is it
difficult?
• Main methods: intensity-based, feature-based
– How do intensity-based methods work?
– Main parameters of intensity-based methods. How can we
choose them?
– How do feature-based methods work?
– Comparison between intensity-based and feature-based
methods
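A minimal sketch of an intensity-based (SSD) matcher for one pixel along the scanline of a rectified pair; the window size and disparity search range are exactly the main parameters asked about above, and the image arrays are assumed grayscale numpy arrays.

import numpy as np

def ssd_disparity(left, right, row, col, window=5, max_disp=64):
    """Disparity minimizing the sum of squared differences for pixel (row, col) of the left image."""
    h = window // 2
    ref = left[row-h:row+h+1, col-h:col+h+1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        if col - d - h < 0:
            break
        cand = right[row-h:row+h+1, col-d-h:col-d+h+1].astype(float)
        cost = np.sum((ref - cand) ** 2)        # SSD over the correlation window
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d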
Epipolar Geometry
• Stereo parameters: extrinsic + intrinsic
• What is the epipolar constraint, why is it important?
• How is epipolar geometry represented?
– Essential matrix
– Fundamental matrix
Essential Matrix
• What is the essential matrix?
• Properties of essential matrix
• Study equations
• Equation satisfied by corresponding points
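The equations, checked numerically under one common convention (right frame = R * (left frame - T)): the essential matrix is built from the extrinsic stereo parameters as E = R [T]x, and corresponding points in normalized camera coordinates satisfy p_r^T E p_l = 0.

import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

R = np.eye(3)                                # illustrative extrinsic parameters
T = np.array([0.2, 0.0, 0.0])

E = R @ skew(T)                              # essential matrix

P_l = np.array([0.5, 0.3, 4.0])              # 3D point in the left camera frame
P_r = R @ (P_l - T)                          # same point in the right camera frame
p_l, p_r = P_l / P_l[2], P_r / P_r[2]        # normalized image coordinates (f = 1)
print(p_r @ E @ p_l)                         # ~0: the epipolar constraint holds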
Fundamental Matrix
• What is the fundamental matrix?
• Properties of fundamental matrix
• Study equations
• Equation satisfied by corresponding points
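The link to the essential matrix, under the same convention as the previous sketch (K_l and K_r are assumed to be the left/right intrinsic matrices, E the essential matrix from above):

import numpy as np

F = np.linalg.inv(K_r).T @ E @ np.linalg.inv(K_l)   # fundamental matrix

# corresponding *pixel* points x_l, x_r (homogeneous) satisfy x_r^T F x_l = 0;
# unlike E, F can be estimated without knowing the intrinsic parameters.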
Eight-point algorithm
• What is it useful for?
• Study steps
• How is the rank-2 constraint enforced?
• Normalized eight-point algorithm
• How can we estimate epipoles and epipolar lines using the
fundamental matrix?
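A sketch of the core of the eight-point algorithm, assuming eight or more pixel correspondences are given; the coordinate normalization (centering and scaling before building the linear system) that the "normalized" variant adds is omitted for brevity.

import numpy as np

def eight_point(pts1, pts2):
    """pts1, pts2: n x 2 arrays of corresponding pixel coordinates, n >= 8."""
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1]
                  for (u1, v1), (u2, v2) in zip(pts1, pts2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)                 # least-squares solution of A f = 0

    # enforce the rank-2 constraint: zero out the smallest singular value
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt

# Epipoles are the null spaces of F (right epipole) and F^T (left epipole);
# the epipolar line in image 2 of a point x1 in image 1 is l2 = F @ x1 (homogeneous line).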
Rectification
• What is the purpose of rectification?
• Why is it useful?
• Study steps
Stereo Reconstruction
• Three cases:
– Known extrinsic and intrinsic parameters
– Known intrinsic parameters
– Unknown extrinsic and intrinsic parameters.
• What information could be recovered in each
case?
• What are the main steps of the first two methods?
(do not memorize equations)
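For the first case (intrinsic and extrinsic parameters known), reconstruction reduces to triangulation: each camera's projection matrix contributes two linear equations in the unknown 3D point, solved in least squares via SVD. A sketch:

import numpy as np

def triangulate(M1, M2, p1, p2):
    """M1, M2: 3x4 projection matrices; p1, p2: (u, v) pixel coordinates of the same point."""
    A = np.vstack([p1[0] * M1[2] - M1[0],
                   p1[1] * M1[2] - M1[1],
                   p2[0] * M2[2] - M2[0],
                   p2[1] * M2[2] - M2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # inhomogeneous 3D point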