Class-Specific Hough Forests for Object Detection
Zhen Yuan Hsu
Advisor: S. J. Wang
Gall, J., Lempitsky, V.: Class-specific Hough forests for object detection. In: IEEE CVPR (2009)

Outline
• Related work
• Why we use random forests
• What a Hough forest is
• How a Hough forest works for object detection

Implicit shape models: Training
• Extract 25x25 patches around Harris corners.
• Generate a codebook of local appearance patches using clustering.
• For each cluster, extract its center and store it in the codebook.
• For each codebook entry, store all positions at which it was found relative to the object center.

Implicit shape models: Testing
1. Given a test image, extract patches and match them to codebook entries.
2. Cast votes for possible positions of the object center.
3. Search for maxima in the voting space.
4. Extract a weighted segmentation mask based on the stored masks for the codebook occurrences.
Each codebook match contributes its stored offsets to the vote.

Why we use random forests
• Random forests scale well to large training data and are fast at both training and test time.
[Figure: a single decision tree partitions the feature space with axis-aligned tests (x1 > w1, x2 > w2, ...); a forest combines trees t1 ... tT, each built from split nodes and leaf nodes that store a category c.]

What randomness means
• Randomness in the data and in the split function: for each node, the split function is randomly selected.

Binary tests (split node)
• Input: a 16x16 image patch feature.
• A threshold decides to which child a patch is sent.
• The test is selected during training from a random subset of all split functions. (See the sketch in Appendix A.)

Randomness – split function
• Try several splits, chosen at random.
• Keep the split that best separates the data (information gain).
• Recurse on the children. (See the node-growing sketch in Appendix C.)

Random forests for object detection
• Regression: localize the object, i.e. vote for its position.
• Classification: decide whether a patch belongs to object class c.

What a Hough forest is
• Random forest + Hough voting = Hough forest.

Hough forests: Training
• Supervised learning.
• Labels:
  – negative or background samples (blue)
  – positive samples (red)
  – offset vectors to the object center (green)
• Feature: the appearance of the local patch.

Hough forests: Training – leaf information
Each leaf L stores two pieces of information for voting:
1. C_L: the proportion of positive sample patches that reached the leaf.
2. D_L = {d_i}, i ∈ A: the offset vectors of those positive patches.

Stopping criteria
A node becomes a leaf when:
1. the number of image patches is below a threshold ε, or
2. the uncertainty (class-label or offset vector) falls below a threshold.

Quality of binary tests
• Goal: minimize the class-label uncertainty and the offset uncertainty.
• The type of uncertainty to minimize is randomly selected for each node.
• Class-label uncertainty: U1(A) = |A| · Entropy({c_i}).
• Offset uncertainty: U2(A) = Σ_{i: c_i = 1} ||d_i − d_A||², where d_A is the mean offset of the positive patches.
• A = the set of image patches {P_i = (I_i, c_i, d_i)}, where c_i is the class label and d_i the offset vector.
(See the sketch in Appendix B.)

Detection
• At a position y in the test image: take the original image, find interest points, and pass the surrounding patches through the trees.
• Each matched patch ends in a leaf with C_L (positive proportion) and D_L = {d_i}.
• Possible centers of the object: y + d_i.

Hough vote
• From a patch at position y, probabilistic votes are cast at y + d_1, y + d_2, y + d_3, ... (Source: B. Leibe)
• For a location x, an image patch I(y), and a tree T: x is the center of the bounding box, x ≈ y + d_i.
• Confidence of a vote: C_L acts as the weight, d_i as the offset vector.
• The votes are averaged over all trees and accumulated over all image patches. (See the voting sketch in Appendix D.)

Detection: Multi-scale and multi-ratio
• Multi-scale: 3D votes (x, y, scale).
• Multi-ratio: 4D votes (x, y, scale, ratio).

Results
[Figures: UIUC Cars (multi-scale), wrong and correct detections at the equal error rate (EER); comparison on the INRIA and TUD pedestrian datasets.]

References
• http://mi.eng.cam.ac.uk/~tkk22/iccv09_tutorial
• Pedestrian detection based on Hough forests, 陳仕儒, master's thesis, Department of Electrical Engineering, National Tsing Hua University, 2012.
• Gall, J.: An Introduction to Random Forests for Multi-class Object Detection.

Thank you for listening!
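Appendix A – Binary test at a split node. A minimal sketch of the pixel-comparison test referred to on the "Binary tests" slide, assuming patches are stored as multi-channel 16x16 arrays; the channel count, the threshold range, and all names are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def make_binary_test(rng, patch_size=16, n_channels=32):
    """Draw one random binary test for a split node.

    The test compares two pixel values of one feature channel against a
    threshold tau and sends the patch to child 0 or child 1.
    """
    a = int(rng.integers(n_channels))          # feature channel (assumed count)
    p, q = rng.integers(patch_size, size=2)    # first pixel (row, col)
    r, s = rng.integers(patch_size, size=2)    # second pixel (row, col)
    tau = int(rng.integers(-255, 256))         # threshold (assumed value range)

    def test(patch):
        # patch: array of shape (n_channels, patch_size, patch_size)
        return 0 if patch[a, p, q] < patch[a, r, s] + tau else 1

    return test

# Usage: rng = np.random.default_rng(0); t = make_binary_test(rng); t(some_patch)
```

During training, a pool of such random tests is generated at every node and the best one is kept, as sketched in Appendix C.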
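Appendix B – Split quality. A sketch of the two uncertainty measures from the "Quality of binary tests" slide: the class-label uncertainty |A| · Entropy({c_i}) and the offset uncertainty, i.e. the scatter of the positive offsets around their mean d_A. The function names and the array-based data layout are my own choices, not the authors' interface.

```python
import numpy as np

def class_label_uncertainty(labels):
    """|A| times the entropy of the binary class labels c_i (0 = background, 1 = object)."""
    labels = np.asarray(labels)
    n = len(labels)
    if n == 0:
        return 0.0
    p = labels.mean()                           # fraction of positive patches
    if p in (0.0, 1.0):                         # pure node: no uncertainty
        return 0.0
    return n * -(p * np.log(p) + (1 - p) * np.log(1 - p))

def offset_uncertainty(labels, offsets):
    """Sum of squared distances of the positive offsets d_i from their mean d_A."""
    labels = np.asarray(labels)
    offsets = np.asarray(offsets, dtype=float)  # shape (n, 2): offsets to the object center
    pos = offsets[labels == 1]
    if len(pos) == 0:
        return 0.0
    return float(((pos - pos.mean(axis=0)) ** 2).sum())

def split_score(test, patches, labels, offsets, kind):
    """Score a candidate test: summed uncertainty of the two children (lower is better)."""
    labels = np.asarray(labels)
    offsets = np.asarray(offsets, dtype=float)
    sides = np.array([test(patch) for patch in patches])
    score = 0.0
    for side in (0, 1):
        idx = sides == side
        if kind == "class":
            score += class_label_uncertainty(labels[idx])
        else:
            score += offset_uncertainty(labels[idx], offsets[idx])
    return score
```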
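Appendix C – Growing a node. A sketch of the training recursion behind the "Randomness – split function" and "Stopping criteria" slides: at each node the type of uncertainty is chosen at random, many random binary tests are scored, the best one is kept, and the recursion stops when too few patches remain or a maximum depth is reached. The dictionary-based tree layout and the default thresholds are illustrative assumptions; `make_test` and `score_fn` stand for the helpers of Appendices A and B.

```python
import numpy as np

def grow_node(patches, labels, offsets, depth, rng, make_test, score_fn,
              max_depth=15, min_patches=20, n_candidates=2000):
    """Recursively grow one node of a Hough tree.

    patches: list of patch feature arrays; labels: array of 0/1 class labels;
    offsets: array of shape (n, 2) with offsets d_i to the object center.
    make_test(rng) -> test(patch) in {0, 1}                    (Appendix A)
    score_fn(test, patches, labels, offsets, kind) -> float    (Appendix B)
    """
    labels = np.asarray(labels)
    offsets = np.asarray(offsets, dtype=float)

    # Leaf conditions: too few patches, or maximum depth reached.
    if len(patches) < min_patches or depth >= max_depth:
        pos = labels == 1
        return {"leaf": True,
                "C_L": float(pos.mean()) if len(labels) else 0.0,  # positive proportion
                "D_L": [tuple(d) for d in offsets[pos]]}           # stored offset vectors

    # Randomly pick which uncertainty this node minimizes.
    kind = rng.choice(["class", "offset"])

    # Try many random binary tests and keep the one with the lowest score.
    best = min((make_test(rng) for _ in range(n_candidates)),
               key=lambda t: score_fn(t, patches, labels, offsets, kind))

    sides = np.array([best(patch) for patch in patches])
    children = {}
    for side in (0, 1):
        idx = sides == side
        children[side] = grow_node([p for p, keep in zip(patches, idx) if keep],
                                   labels[idx], offsets[idx], depth + 1, rng,
                                   make_test, score_fn,
                                   max_depth, min_patches, n_candidates)
    return {"leaf": False, "test": best, "children": children}
```

A forest is then simply a list of such roots, each grown on its own (sub)set of training patches.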
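Appendix D – Casting Hough votes. A sketch of the accumulation step described on the "Hough vote" slide: each test patch at position y is passed down every tree, the reached leaf votes for the centers y + d_i with weight C_L / (|D_L| · number of trees), and the votes of all patches are summed into a Hough image. The tree layout is the one from Appendix C; the normalization by |D_L| and by the number of trees is how I read "averaged over all trees", so treat it as an assumption.

```python
import numpy as np

def descend(node, patch):
    """Walk a tree in the Appendix C layout down to a leaf dictionary."""
    while not node["leaf"]:
        node = node["children"][node["test"](patch)]
    return node

def hough_votes(image_shape, patches, positions, trees):
    """Accumulate a 2D Hough image of object-center votes.

    patches:   list of patch feature arrays
    positions: list of (row, col) patch locations y in the image
    trees:     list of root nodes of the Hough forest
    """
    hough = np.zeros(image_shape, dtype=float)
    for patch, (yr, yc) in zip(patches, positions):
        for tree in trees:
            leaf = descend(tree, patch)
            if not leaf["D_L"]:                    # leaf stores no positive offsets
                continue
            weight = leaf["C_L"] / (len(leaf["D_L"]) * len(trees))
            for dr, dc in leaf["D_L"]:
                r, c = int(yr + dr), int(yc + dc)  # possible object center y + d_i
                if 0 <= r < image_shape[0] and 0 <= c < image_shape[1]:
                    hough[r, c] += weight
    return hough
```

Object hypotheses are the local maxima of the (typically Gaussian-smoothed) Hough image; repeating the accumulation over several rescaled images gives the 3D votes (x, y, scale) mentioned on the multi-scale detection slide.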