Obstacle Free Path Determination Using a Stereo Vision System
Marty Otzenberger
EGGN‐517
May 3, 2012
Outline
• Path Planning
• Stereo Vision
• Disparity Maps
• Image Plane Analysis and Shortcuts
• My Algorithm
• Test Images
• Results
• Statistics
• Conclusions
• Problems/Limits
• Future Work/Improvements
• References
• Questions
Path Planning
• Using sensors, such as stereo cameras, to determine where an autonomous vehicle can travel without hitting an obstacle.
• Common sensors used include [2][3]…
– Stereo Cameras
– LIDAR
– IR
– Ultrasonic
[Figure from [1]]
Stereo Vision
• Using 2 or more cameras to get 3D information about a scene.
• The data is usually used in one of two ways:
– Occupancy Maps
– Image-Space Disparity Analysis
• Occupancy maps add an additional layer of complexity to the problem, but multiple algorithms exist to follow a map [5].
[Figure from [4]]
Disparity Maps
• Disparity is the difference in the location of an object between the left and right images.
• A disparity map is a grayscale image in which each pixel value encodes disparity, and therefore the distance from the camera to that point in the scene (the relation is sketched below).
• Many methods exist to calculate a disparity map [8].
[Figures from [6]]
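For context, the standard depth-from-disparity relation for a rectified stereo pair is Z = f·B/d, with focal length f, baseline B, and disparity d. Below is a minimal sketch of that conversion; the focal length and baseline are placeholder values, not the parameters of the camera used in this project.

```python
# Minimal sketch: converting a disparity map into depth for a rectified stereo
# pair. The focal length and baseline below are assumed placeholder values.
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (assumed)
BASELINE_M = 0.08   # camera baseline in meters (assumed)

def disparity_to_depth(disparity):
    """Return depth in meters; pixels with zero disparity are left as NaN."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full(disparity.shape, np.nan)
    valid = disparity > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth
```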
Image Plane Analysis
• For an obstacle-free path, the Z distance should increase linearly from the bottom of the image plane upward [1].
• Any obstacles will show up as discrepancies in this linearity (a per-column sketch of this check follows below).
• Can also use a comparison to a reference map of the ground plane [7].
[Figures from [1]]
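A hedged sketch of that per-column idea, assuming depth values ordered from the bottom row of the image upward; the residual threshold is illustrative and not a value from the project or from [1].

```python
# Per-column linearity check (illustrative): fit depth vs. row with a straight
# line and treat large residuals as potential obstacles. Threshold is assumed.
import numpy as np

def column_is_clear(depth_column, max_residual_m=0.5):
    """depth_column: depth values for one column, index 0 = bottom of image."""
    rows = np.arange(len(depth_column))
    valid = np.isfinite(depth_column)
    if valid.sum() < 3:
        return False                                  # too little data to judge
    slope, intercept = np.polyfit(rows[valid], depth_column[valid], 1)
    residuals = depth_column[valid] - (slope * rows[valid] + intercept)
    return slope > 0 and np.max(np.abs(residuals)) < max_residual_m
```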
Possible Algorithm Shortcut
• For obstacle avoidance, a dense 3D depth image is less important [1].
– Most obstacles will be of a reasonably large size
– Can greatly reduce the necessary computational power
• I only processed every 5th column of the image, which may have been more than absolutely necessary.
My Algorithm
• Camera Calibration
– Lens Distortion
– Stereo Alignment
• Disparity Maps
– Map Augmentation
• Linear Depth
• Results
Camera Calibration
• Used a simple stereo camera
• After calibrating each lens for distortion, used the Stereo Calibration toolbox to align the epipolar lines (a rough OpenCV equivalent is sketched below).
• Used a set of 9 calibration images.
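The project used a MATLAB stereo calibration toolbox; the following is only a rough OpenCV equivalent of the same two steps (per-lens calibration, then stereo calibration). The checkerboard pattern size, square size, and file names are assumptions.

```python
# Not the toolbox used in the project; an OpenCV sketch of the same workflow.
# Pattern size, square size, and filenames are assumptions.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)     # inner checkerboard corners (assumed)
SQUARE_M = 0.025     # checkerboard square size in meters (assumed)

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_M

obj_pts, left_pts, right_pts = [], [], []
size = None
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    size = gl.shape[::-1]
    ok_l, corners_l = cv2.findChessboardCorners(gl, PATTERN)
    ok_r, corners_r = cv2.findChessboardCorners(gr, PATTERN)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(corners_l)
        right_pts.append(corners_r)

# Per-lens intrinsics and distortion, then the stereo extrinsics (R, T).
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```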
Image Preparation
• The camera stores both 4:3 images together on a single 4:3 image plane.
• Had to split and rescale the images to get a good calibration (a splitting sketch follows below).
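A sketch of that preparation step, under the assumption that the two views sit side by side in one stored frame (the slides do not specify the layout); the filename is hypothetical.

```python
# Split one stored frame into left/right views and rescale each half back to
# the full frame size. Assumes a side-by-side layout; filename is hypothetical.
import cv2

def split_stereo_frame(path):
    frame = cv2.imread(path)
    h, w = frame.shape[:2]
    left, right = frame[:, : w // 2], frame[:, w // 2:]
    left = cv2.resize(left, (w, h))     # restore the full 4:3 proportions
    right = cv2.resize(right, (w, h))
    return left, right

left_img, right_img = split_stereo_frame("stereo_frame_01.jpg")
```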
Stereo Calibration
• The stereo lenses are slightly cross-eyed (converging), so stereo calibration was necessary to reduce this effect (a rectification sketch follows below).
• The final calibration showed a reasonable amount of lens distortion and confirmed the nonparallel orientation of the lenses.
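Continuing the OpenCV sketch from the calibration slide (again, not the toolbox actually used): the estimated R and T are used to rectify both views so the epipolar lines become horizontal. K1, d1, K2, d2, R, T, size, left_img, and right_img are assumed to come from the earlier sketches.

```python
# Rectification sketch: warp both views so epipolar lines are horizontal,
# compensating for the converging ("cross-eyed") lenses. Inputs come from the
# earlier calibration and image-splitting sketches.
import cv2

R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)

left_rect = cv2.remap(left_img, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_img, map2x, map2y, cv2.INTER_LINEAR)
```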
Test Images
• Had 27 test images, of which I am using 16.
– The others were dropped due to large regions of saturation, which did not work with the disparity algorithm (a simple screening sketch follows below).
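A hedged sketch of that kind of screening: an image is flagged when too many pixels are near saturation. The 5% fraction and the 250 gray-level cutoff are assumptions, not values from the project.

```python
# Flag images dominated by saturated regions. Threshold values are assumed.
import numpy as np

def too_saturated(gray, max_fraction=0.05, level=250):
    """gray: 8-bit grayscale image as a NumPy array."""
    return np.mean(gray >= level) > max_fraction
```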
Disparity Maps
• Used the disparity map generation code from class.
• Required a lot of parameter tweaking to get a decent-quality disparity map.
• Maps took 5-10 minutes per image to generate.
• Had to make many adjustments to balance mismatches against a lack of matches.
– Would get maps with either large amounts of noise or large portions with no disparity data.
• Had to reverse the left and right images to get positive disparities.
– Not sure if the code was to blame, or if I misclassified my images.
• Also had higher disparity for closer images
– Again, possibly code, or possibly from calibration.
Disparity Maps Cont.
• Ended up spending most of my time getting the disparity maps to turn out reasonably well.
• Some are still lower quality than I had hoped for.
• Settled on a cross-correlation threshold of 0.6 and a search window of ±2 by ±55 pixels.
• Performed a correlation match for every 5th column and every 2nd row (a rough re-creation is sketched below).
Note: the disparity maps shown on this slide are a Gaussian-filtered version.
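The class code itself is not reproduced here; the following is only a rough re-creation of the matching described on this slide: normalized cross-correlation at every 5th column and every 2nd row, a ±2-row by ±55-column search window, and a 0.6 acceptance threshold. The 11x11 template size is an assumption.

```python
# Sparse NCC disparity sketch (not the class code): match an 11x11 template
# from the left image within a +/-2 row by +/-55 column window in the right
# image, keeping matches whose normalized correlation is at least 0.6.
import cv2
import numpy as np

def sparse_ncc_disparity(left, right, half=5, row_rng=2, col_rng=55,
                         row_step=2, col_step=5, thresh=0.6):
    """left, right: rectified 8-bit grayscale images of identical size."""
    h, w = left.shape
    disp = np.zeros((h, w), np.float32)
    for r in range(half + row_rng, h - half - row_rng, row_step):
        for c in range(half + col_rng, w - half - col_rng, col_step):
            template = left[r - half:r + half + 1, c - half:c + half + 1]
            search = right[r - half - row_rng:r + half + row_rng + 1,
                           c - half - col_rng:c + half + col_rng + 1]
            scores = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
            _, best, _, best_loc = cv2.minMaxLoc(scores)
            if best >= thresh:
                # Horizontal offset of the best match from the zero-disparity
                # position gives the disparity magnitude.
                disp[r, c] = abs(best_loc[0] - col_rng)
    return disp
```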
Image Filtering
• First sub‐sampled the images to remove zeros.
• Tried using a Gaussian filter and an averaging filter on the sub‐sampled version of the image to reduce outliers for path calculations.
• This seemed to make the outliers denser, but did not remove them well.
– Decreased the magnitude of the outliers, but increased the number of outliers.
• Settled on no filtering, just sub-sampling (the comparison is sketched below).
[Figure panels: No Filter, Gaussian Filter, Averaging Filter]
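A hedged illustration of the comparison above: the sparse map is sub-sampled down to the entries that were actually matched, then optionally smoothed with a Gaussian or averaging (box) filter. The 5x5 kernel sizes and the stand-in data are assumptions.

```python
# Sub-sample the sparse disparity map, then compare the two smoothing options
# that were tried. Kernel sizes are assumed; the input here is stand-in data.
import cv2
import numpy as np

def subsample(disp_map, row0=0, col0=0, row_step=2, col_step=5):
    """Keep only the rows/columns that were actually matched, dropping the
    zero-filled entries in between."""
    return disp_map[row0::row_step, col0::col_step]

disp_map = np.random.rand(240, 320).astype(np.float32)   # stand-in sparse map
small = subsample(disp_map)
gaussian = cv2.GaussianBlur(small, (5, 5), 0)   # option 1: Gaussian filter
averaged = cv2.blur(small, (5, 5))              # option 2: averaging filter
# The project ultimately used the sub-sampled map with no filtering.
```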
Linear Depth
• Set a restriction on the minimum r² value for the line-of-best-fit calculation.
– r > 0.85
• Ignored the bottom and top 10 pixels to reduce noise problems.
– This seemed to be the location of most of my noise.
• Worked my way up the image in steps of 10 pixels, checking the linearity from the bottom to that point each time.
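A sketch of that climb up the image, under the assumption that the 0.85 threshold is applied to the correlation coefficient r of a straight-line fit of depth versus row (the slides quote both r² and r > 0.85). The function works on one column of depth values.

```python
# Work up one column in 10-pixel steps, keeping the highest point up to which
# depth grows linearly from the bottom. The r-vs-r^2 choice is an assumption.
import numpy as np

def free_path_height(depth_col, margin=10, step=10, r_min=0.85):
    """depth_col: depth values for one column, index 0 = bottom of the image.
    Returns the highest row (counted from the bottom) that still fits a line."""
    best = 0
    n = len(depth_col)
    for top in range(margin + step, n - margin, step):
        rows = np.arange(margin, top)
        seg = depth_col[margin:top]
        valid = np.isfinite(seg) & (seg > 0)     # skip zeros / missing matches
        if valid.sum() < 3:
            continue
        r = np.corrcoef(rows[valid], seg[valid])[0, 1]
        if r >= r_min:
            best = top
    return best
```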
Linear Depth Cont.
• Having some problems with paths that have a large section of zeros at the bottom of the image due to bad matching.
Results
Note: Edges not selected due to cross‐correlation boundaries.
Results Cont.
Statistics
• Firm statistics are difficult to determine, as it is hard to quantify a pass or a fail.
• Some numbers…
– Total Images Taken: 27
– Total Images Processed Fully: 16
– Large False Positives: 4
– Large False Negatives: 5
– No Path Returned at All: 5
– Reasonable Result: 5
Statistics Cont.
• In general:
– About 60% of images (16 of 27) were successfully processed without error.
• Of those 60%...
– 31.25% of images had acceptable results.
– 31.25% of images returned no usable data.
– 37.5% of images had marginal success.
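For reference, these percentages follow directly from the counts on the previous slide; the marginal count of 6 is implied by the 37.5% figure rather than stated explicitly.

```python
# Recompute the quoted percentages from the raw counts.
processed, taken = 16, 27
acceptable, no_data = 5, 5
marginal = processed - acceptable - no_data       # 6, implied by 37.5%

print(f"processed without error: {processed / taken:.1%}")      # ~59.3% ("60%")
print(f"acceptable results:      {acceptable / processed:.2%}")  # 31.25%
print(f"no usable data:          {no_data / processed:.2%}")     # 31.25%
print(f"marginal success:        {marginal / processed:.2%}")    # 37.50%
```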
Conclusions
• Algorithm is very dependent on the scene, lighting, and obstacle size/location.
• Many of the things that broke the algorithm would be impossible to avoid in a real-life situation.
– Glare.
– Low-texture backgrounds.
– Small or poorly located obstacles.
• Bottom Line:
– Not a reliable algorithm to depend on.
Problems/Limits
• Regions of saturation break the disparity map algorithm.
• High number of outliers in the disparity map.
• Voids in the disparity map cause false negatives on the free path calculation.
• Algorithm is very slow!
– Approximately 10 minutes per image.
• Poor image content can lead to null results.
Future Work/Improvements
• Replace the disparity map algorithm with a more robust, efficient, and lower-noise algorithm [8] (one off-the-shelf candidate is sketched after this list).
– Algorithms exist to determine pixel-to-pixel discontinuities and reduce noise [9].
• Improve linear detection for paths with zeros at the bottom of the image.
• Make process more efficient to reduce computation time.
• Could also possibly impose a limit on the slope to better identify the free path.
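As one concrete, readily available option for that replacement (not the specific algorithms of [8] or [9]), OpenCV's semi-global block matcher produces denser maps in seconds rather than minutes; all parameter values below are illustrative.

```python
# Example replacement matcher: OpenCV semi-global block matching (SGBM).
# Parameters are illustrative, not tuned for this project's images.
import cv2

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,        # search range; must be a multiple of 16
    blockSize=7,
    P1=8 * 7 * 7,             # smoothness penalty for small disparity changes
    P2=32 * 7 * 7,            # smoothness penalty for large disparity changes
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
# left_rect / right_rect: rectified grayscale images from the earlier sketches.
disparity = sgbm.compute(left_rect, right_rect).astype("float32") / 16.0
```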
References
[1] N. Ortigosa, S. Morillas, and G. Peris‐Fajarnés, “Obstacle‐Free Pathway Detection by Means of Depth Maps,” Journal of Intelligent & Robotic Systems, vol. 63, no. 1, pp. 115‐129, 2011.
[2] S. Yoon, K.S. Roh, and Y. Shim, “Vision‐Based Obstacle Detection and Avoidance: Application to Robust Indoor Navigation of Mobile Robots,” Advanced Robotics, vol. 22, no. 4, pp. 477‐492, 2007.
[3] G.N. DeSouza, A.C. Kak, “Vision for Mobile Robot Navigation: A Survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237‐267, 2002.
[4] D. Murray and J.J. Little, “Using Real‐Time Stereo Vision for Mobile Robot Navigation,” Autonomous Robotics, vol. 8, no. 2, pp. 161‐171, 2000.
[5] M.W. Otte, S.G. Richardson, J. Mulligan, and G. Grudic, “Path Planning in Image Space for Autonomous Robot Navigation in Unstructured Environments,” Journal of Field Robotics, vol. 26, no. 2, pp. 212‐240, 2009.
[6] W. Hoff, “Stereo Vision,” EGGN 512, Colorado School of Mines, Engineering Division, pp. 10, 2012.
[7] F. Ferrari, E. Grosso, G. Sandini, and M. Magrassi, “A Stereo Vision System for Real Time Obstacle Avoidance in Unknown Environment,” Proceedings of IROS '90, IEEE International Workshop on Intelligent Robots and Systems, 1990.
[8] D. Scharstein and R. Szeliski, “A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms,” International Journal of Computer Vision, vol. 47, no. 1-3, pp. 7-42, 2002.
[9] S. Birchfield, C. Tomasi, “Depth Discontinuities by Pixel‐to‐Pixel Stereo,” International Journal of Computer Vision, vol. 35, no. 3, pp. 269‐293, 1999.
Questions??