Institute of Information Science, Academia Sinica





Learning Bottom-Up and Top-Down Saliency Maps

  • Lecturer: Prof. Ming-Hsuan Yang (EECS, University of California, Merced)
    Host: Dr. Chu-Song Chen
  • Time: 2011-12-29 (Thu.) 10:30 – 12:00
  • Location: Auditorium 106 at new IIS Building

Visual saliency detection is a challenging problem in computer vision of great importance, with numerous applications. In the first part of this talk, I will present a novel model for bottom-up saliency within the Bayesian framework by exploiting low- and mid-level cues. In contrast to most existing methods that operate directly on low-level cues, we propose a coarse-to-fine approach in which a coarse saliency region is first obtained via a convex hull of interest points. Next, we analyze the saliency information with mid-level visual cues via superpixels. We present a Laplacian sparse subspace clustering method to group superpixels with local features, and analyze the results with respect to the coarse saliency region to compute the prior saliency map. Meanwhile, we use the low-level visual cues based on the convex hull to compute the observation likelihood, thereby facilitating inference of Bayesian saliency at each pixel. Extensive experiments on a large dataset show that our Bayesian saliency model performs favorably against state-of-the-art algorithms.
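The per-pixel Bayesian inference step described above can be sketched as follows. This is a minimal illustration, not the speaker's implementation: it assumes a prior saliency map (here, the mid-level clustering result) and per-pixel observation likelihoods for the saliency and background hypotheses (from low-level cues); the function name and inputs are illustrative.

```python
import numpy as np

def bayesian_saliency(prior, lik_sal, lik_bg):
    """Per-pixel posterior saliency via Bayes' rule:
    p(sal | obs) = p(obs | sal) p(sal) / (p(obs | sal) p(sal) + p(obs | bg) p(bg))

    prior   : array of prior saliency probabilities in [0, 1]
    lik_sal : array of observation likelihoods under the saliency hypothesis
    lik_bg  : array of observation likelihoods under the background hypothesis
    """
    num = lik_sal * prior                  # joint for the saliency hypothesis
    den = num + lik_bg * (1.0 - prior)     # total evidence over both hypotheses
    return num / (den + 1e-12)             # small epsilon guards division by zero

# Illustrative toy example: two pixels, one supported by both the prior
# and the saliency likelihood, one supported by neither.
prior = np.array([0.8, 0.2])
lik_sal = np.array([0.9, 0.1])
lik_bg = np.array([0.1, 0.9])
posterior = bayesian_saliency(prior, lik_sal, lik_bg)
```

Note how the posterior sharpens the prior: a pixel where both cues agree on saliency moves close to 1, while a pixel where both favor the background moves close to 0.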