Title: Robotic Teaching System: 3D Vision Model Acquisition through Human-Computer Interaction

Abstract: We are designing a human-computer interaction (HCI) system for teaching a robot to acquire vision models of 3D objects from their 2D images. The purpose of this study is to establish efficient collaboration between machine and human so that the overall performance of the robotic vision system can be improved. Building 3D vision models is carried out according to two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help them make correct inputs; and 2) verify each input provided by the human for consistency with prior inputs. In this collaborative approach, for example, epipolar lines superimposed on the 2D images assist the human in identifying stereo correspondences, facilitating 3D feature reconstruction by the computer system. The human can interactively suggest the next-best viewpoint from which the robot should capture images, examine the outputs of different feature extractors from a menu of low-level segmentation methods, and choose the one that is perceptually most appropriate for acquiring the vision model. The 3D vision models are then used for object localization in robotic manipulation. Online demos and videos validate this study, which is funded by Ford Motor Company.

Keywords: Robot vision, Human-computer interaction, Image-based object model acquisition

Presenter: Dr. Yuichi Motai, School of Electrical and Computer Engineering, Purdue University

Bio: Yuichi Motai received a Bachelor of Engineering degree in Instrumentation Engineering from Keio University in 1991, a Master of Engineering degree in Applied Systems Science from Kyoto University in 1993, and a Ph.D. degree in Electrical and Computer Engineering from Purdue University in 2002. He was a tenured Research Scientist at the Secom Intelligent Systems Laboratory from 1993 to 1997.
His research interests are in the broad area of computational intelligence, especially computer vision, human-computer interaction, image synthesis, ubiquitous computing, personal robotics, and smart sensing.
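The epipolar-line assistance mentioned in the abstract rests on a standard relation: given the fundamental matrix F between two views, a point x clicked in the first image constrains its correspondence in the second image to the line l' = F x. A minimal sketch of how such a guide line could be drawn and used to check a human-selected correspondence is below; the matrix F and the pixel coordinates are illustrative assumptions, not values from the system described in the talk:

```python
import numpy as np

def epipolar_line(F, x):
    """Homogeneous line l' = F @ [u, v, 1] in the second image for a
    point x = (u, v) clicked in the first image. The line is scaled so
    that |l' . p| gives the point-to-line distance in pixels."""
    l = F @ np.array([x[0], x[1], 1.0])
    return l / np.hypot(l[0], l[1])

# Hypothetical fundamental matrix, for illustration only.
F = np.array([[ 0.000, -0.001,  0.05],
              [ 0.001,  0.000, -0.12],
              [-0.060,  0.110,  1.00]])

# Line superimposed on image 2 for a point clicked in image 1.
l = epipolar_line(F, (320.0, 240.0))

# A human-suggested correspondence can be verified by its distance
# to the epipolar line before the system accepts it.
x2 = np.array([300.0, 250.0, 1.0])
dist = abs(l @ x2)  # pixel distance of x2 from the guide line
```

In the interaction loop described in the abstract, such a distance check is one way the system could "verify each input provided by the human for consistency with prior inputs": a suggested correspondence far from the guide line would be flagged before 3D reconstruction.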