Accurate 3D Face Reconstruction from Weakly Calibrated Wide Baseline Images with Profile Contours
- Lecturer: Yuping Lin (Ph.D. candidate, University of Southern California)
- Host: Dr. Chu-Song Chen
- Time: 2010-03-19 (Fri.) 10:30 – 12:00
- Location: Auditorium 107 at the new IIS Building
Abstract:
We propose a method to generate a highly accurate 3-D face model from
a set of wide-baseline images in a weakly calibrated setup. Our
approach is purely data-driven and produces faithful 3-D models
without any pre-defined model, unlike statistical model-based
approaches. Our results rely neither on a critical initialization
step nor on parameter tuning of the optimization steps. We process
five images (including profile views), infer accurate camera poses
for all views, and then recover a dense 3-D face model. The quality
of the 3-D face model depends on the accuracy of the estimated
head-camera motion. First, we propose an iterative bundle adjustment
approach to remove outliers among the corresponding points. Contours
in the profile views are matched to provide reliable correspondences
that link the views on the two opposite sides of the face. For dense
reconstruction, we propose a face-specific cylindrical representation
that allows us to solve a global optimization problem for N-view
dense aggregation. Profile contours are used once again to provide
constraints in this optimization step. Experimental results on
synthetic and real images show that our method yields accurate and
stable reconstructions from wide-baseline images. We compare our
method with state-of-the-art methods and show that it gives
significantly better results in terms of both accuracy and
efficiency.
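To make the camera-estimation step more concrete, the following is a
minimal Python sketch, under simplifying assumptions of our own, of
iterative bundle adjustment with outlier rejection: camera poses and
3-D points are refined by minimizing reprojection error with SciPy's
generic least_squares, and correspondences whose residual exceeds a
threshold are discarded before the next round. The parameterization
(angle-axis rotation plus translation, one shared focal length) and
names such as project, iterative_ba, and reproj_threshold are
illustrative only, not the speaker's actual implementation; the
profile-contour correspondences would enter here simply as additional
point matches.

# Hypothetical sketch of iterative bundle adjustment with outlier
# rejection; not the speaker's implementation.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points3d, cam_params, focal):
    """Project each 3-D point with its observation's camera (angle-axis + translation)."""
    rot = Rotation.from_rotvec(cam_params[:, :3])       # per-observation rotation
    cam_pts = rot.apply(points3d) + cam_params[:, 3:6]  # world -> camera frame
    return focal * cam_pts[:, :2] / cam_pts[:, 2:3]     # pinhole projection


def residuals(x, n_cams, n_pts, cam_idx, pt_idx, obs, focal):
    """Stacked 2-D reprojection errors for all surviving correspondences."""
    cams = x[:n_cams * 6].reshape(n_cams, 6)
    pts = x[n_cams * 6:].reshape(n_pts, 3)
    return (project(pts[pt_idx], cams[cam_idx], focal) - obs).ravel()


def iterative_ba(cams, pts, cam_idx, pt_idx, obs, focal,
                 reproj_threshold=2.0, max_rounds=5):
    """Alternate bundle adjustment with rejection of high-residual matches."""
    n_cams, n_pts = len(cams), len(pts)
    for _ in range(max_rounds):
        x0 = np.hstack([cams.ravel(), pts.ravel()])
        sol = least_squares(residuals, x0, method="trf",
                            args=(n_cams, n_pts, cam_idx, pt_idx, obs, focal))
        cams = sol.x[:n_cams * 6].reshape(n_cams, 6)
        pts = sol.x[n_cams * 6:].reshape(n_pts, 3)
        err = np.linalg.norm(project(pts[pt_idx], cams[cam_idx], focal) - obs,
                             axis=1)
        inliers = err < reproj_threshold
        if inliers.all():
            break                      # no outliers left among the matches
        cam_idx, pt_idx, obs = cam_idx[inliers], pt_idx[inliers], obs[inliers]
    return cams, pts, cam_idx, pt_idx, obs

A production implementation would also exploit the sparsity of the
Jacobian (e.g., via the jac_sparsity argument of least_squares)
instead of treating the problem densely as this toy version does.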
In addition, we have developed an algorithm for efficient and
accurate 3D reconstruction of urban scenes. We present a novel
approach to 3D reconstruction of urban scenes from aerial imagery.
State-of-the-art 3D urban scene reconstruction methods use the
Manhattan-world assumption to regularize the structure (the scene is
piecewise planar and the planes are axis-aligned), which is only
valid for ground-based images. Our approach, on the other hand, makes
the more general assumption that the planes are either horizontal or
vertical. Together with edge information, which is prevalent in urban
scene imagery, we formulate the dense reconstruction problem as a
2-pass dynamic programming problem that can be solved efficiently.
Moreover, our algorithm is fully parallelizable and performs the
reconstruction of 1M points (with 160 discrete height levels) in less
than 1 minute on a GPU. The results preserve a high level of detail
and show high visual quality.
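As a rough illustration of how a 2-pass dynamic programming
formulation over discrete height levels can look, here is a small
Python sketch under assumed inputs: a per-pixel cost volume (e.g.,
multi-view photo-consistency over 160 height hypotheses) is smoothed
by one DP sweep along image rows and a second sweep along columns,
and the height map is the per-pixel argmin of the accumulated cost.
The functions dp_pass and two_pass_dp, the linear smoothness penalty,
and the toy data are assumptions for illustration; the speaker's GPU
algorithm and its use of edge information are not reproduced here.

# Hypothetical sketch of 2-pass dynamic programming over discrete
# height levels; not the speaker's algorithm.
import numpy as np


def dp_pass(cost, penalty):
    """One forward DP sweep along axis 0 of a (length, n_labels) cost array."""
    acc = cost.copy()
    n_labels = cost.shape[1]
    # Pairwise term: penalty * |label_i - label_j| (plain linear penalty).
    label_dist = penalty * np.abs(np.arange(n_labels)[:, None] -
                                  np.arange(n_labels)[None, :])
    for i in range(1, len(cost)):
        # For every current label, add the best predecessor label's cost.
        acc[i] += np.min(acc[i - 1][None, :] + label_dist, axis=1)
    return acc


def two_pass_dp(cost_volume, penalty=0.1):
    """cost_volume: (rows, cols, n_height_levels) per-pixel unary costs."""
    rows, cols, _ = cost_volume.shape
    aggregated = np.zeros_like(cost_volume)
    # Pass 1: aggregate along each image row (left to right).
    for r in range(rows):
        aggregated[r] = dp_pass(cost_volume[r], penalty)
    # Pass 2: aggregate along each image column (top to bottom),
    # starting from the row-aggregated costs.
    for c in range(cols):
        aggregated[:, c] = dp_pass(aggregated[:, c], penalty)
    # Height label per pixel = argmin of the accumulated cost.
    return aggregated.argmin(axis=2)


# Toy usage: random cost volume with 160 discrete height levels.
if __name__ == "__main__":
    costs = np.random.rand(64, 64, 160).astype(np.float32)
    height_map = two_pass_dp(costs)
    print(height_map.shape)  # (64, 64)

Each sweep in this naive form costs O(L^2) per pixel for L height
levels; reaching the reported throughput of 1M points in under a
minute would presumably require a cheaper pairwise update and the
GPU parallelization over independent rows and columns that the
abstract mentions.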
Bio:
Yuping Lin received the B.B.A. degree in information management from
National Taiwan University, Taipei, in 2003, and the M.S. degree in
computer science from the University of Southern California, Los
Angeles, in 2006. He is currently pursuing the Ph.D. degree at the
University of Southern California, Los Angeles. He was with the
Institute of Information Science, Academia Sinica, Taiwan, in 2003
and in the spring of 2005, and with Siemens Medical Solutions,
Philadelphia, in the summer of 2008. His recent research interests
include image registration and 3D reconstruction.