All existing gaze-sensing techniques require careful and often labor-intensive calibration prior to gaze estimation. This limitation severely restricts the use of gaze estimation in many applications. In this talk, we introduce two of our recent attempts to overcome this limitation of existing gaze estimation techniques. Our key idea is to exploit user actions and the properties of the human visual system, more specifically a computational model of visual saliency, in the context of appearance-based gaze estimation.
Yoichi Sato is a professor at the Institute of Industrial Science, the University of Tokyo, Japan. He received the BSE degree from the University of Tokyo in 1990, and the MS and PhD degrees in robotics from the School of Computer Science, Carnegie Mellon University, in 1993 and 1997, respectively. His research interests include physics-based vision, reflectance analysis, image-based modeling, tracking and gesture analysis, and computer vision for HCI. He has served, and continues to serve, in conference organization and journal editorial roles, including for IEEE PAMI and IJCV, and as Program Chair of ECCV 2012.