Saliency Detection: A Boolean Map Approach
Supplementary Materials

Jianming Zhang    Stan Sclaroff
Department of Computer Science, Boston University
{jmzhang,sclaroff}@bu.edu

Saliency map samples for eye fixation prediction are shown in Figs. 1-2; saliency map samples for salient object detection are shown in Figs. 3-7.

References

[1] R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk. Frequency-tuned salient region detection. In CVPR, 2009.
[2] A. Borji and L. Itti. Exploiting local and global patch rarities for saliency detection. In CVPR, 2012.
[3] N. Bruce and J. Tsotsos. Saliency, attention, and visual search: An information theoretic approach. Journal of Vision, 9(3), 2009.
[4] M. Cheng, G. Zhang, N. Mitra, X. Huang, and S. Hu. Global contrast based salient region detection. In CVPR, 2011.
[5] A. Garcia-Diaz, X. Vidal, X. Pardo, and R. Dosil. Saliency from hierarchical adaptation through decorrelation and variance normalization. IVC, 2011.
[6] S. Goferman, L. Zelnik-Manor, and A. Tal. Context-aware saliency detection. PAMI, 34(10), 2012.
[7] J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. In NIPS, 2007.
[8] X. Hou, J. Harel, and C. Koch. Image signature: Highlighting sparse salient regions. PAMI, 34(1), 2012.
[9] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. PAMI, 20(11):1254–1259, 1998.
[10] T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look. In CVPR, 2009.
[11] J. Li, M. D. Levine, X. An, X. Xu, and H. He. Visual saliency based on scale-space analysis in the frequency domain. PAMI, 35(4), 2013.
[12] B. Schauerte and R. Stiefelhagen. Quaternion-based spectral saliency detection for eye fixation prediction. In ECCV, 2012.
[13] Y. Wei, F. Wen, W. Zhu, and J. Sun. Geodesic saliency using background priors. In ECCV, 2012.
[14] Q. Yan, L. Xu, J. Shi, and J. Jia. Hierarchical saliency detection. In CVPR, 2013.

Figure 1. Saliency Maps for Eye Fixation Prediction. We show saliency maps from BMS, ∆QDCT [12], SigSal [8], LG [2], AWS [5], HFT [11], CAS [6], Judd [10], AIM [3], GBVS [7], and Itti [9] on the MIT dataset [10] (the last two columns) and the Toronto dataset [3] (the rest). GT denotes the eye fixation heat map, generated by blurring the raw eye tracking map.

Figure 2. Saliency Maps for Eye Fixation Prediction. We show saliency maps from BMS, ∆QDCT [12], SigSal [8], LG [2], AWS [5], HFT [11], CAS [6], Judd [10], AIM [3], GBVS [7], and Itti [9] on the MIT dataset [10] (the first column) and the ImgSal dataset [11] (the rest). GT denotes the eye fixation heat map, generated by blurring the raw eye tracking map.

Figure 3. Saliency Maps for Salient Object Detection. We show saliency maps from BMS, GSSP [13], HSal [14], RC [4], FT [1], HFT [11], and CAS [6] on the ASD dataset [1]. GT denotes the ground truth mask.

Figure 4. Saliency Maps for Salient Object Detection. We show saliency maps from BMS, GSSP [13], HSal [14], RC [4], FT [1], HFT [11], and CAS [6] on the ASD dataset [1]. GT denotes the ground truth mask.

Figure 5. Saliency Maps for Salient Object Detection. We show saliency maps from BMS, HSal [14], RC [4], FT [1], HFT [11], and CAS [6] on the ImgSal dataset [11]. GT denotes the ground truth manually labeled by different subjects.

Figure 6. Saliency Maps for Salient Object Detection. We show saliency maps from BMS, HSal [14], RC [4], FT [1], HFT [11], and CAS [6] on the ImgSal dataset [11]. GT denotes the ground truth manually labeled by different subjects.

Figure 7. Saliency Maps for Salient Object Detection. We show saliency maps from BMS, HSal [14], RC [4], FT [1], HFT [11], and CAS [6] on the ImgSal dataset [11]. GT denotes the ground truth manually labeled by different subjects.
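Note on the GT heat maps in Figs. 1-2: as stated in the captions, each heat map is generated by blurring the raw eye tracking map. Below is a minimal Python sketch of this step, assuming a Gaussian blur; the kernel width sigma_px and the function name fixation_heat_map are illustrative assumptions, not values or names taken from our implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fixation_heat_map(raw_fixation_map, sigma_px=25.0):
        # raw_fixation_map: 2-D array, nonzero at fixated pixels, 0 elsewhere.
        # Blur the discrete fixation points into a continuous heat map.
        # sigma_px is an assumed kernel width; in practice it is often tied
        # to the viewing angle of the eye tracking setup.
        heat = gaussian_filter(raw_fixation_map.astype(np.float64), sigma=sigma_px)
        # Normalize to [0, 1] for display; skip if the map has no fixations.
        peak = heat.max()
        return heat / peak if peak > 0 else heat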