

Journal of Information Science and Engineering, Vol. 29 No. 6, pp. 1265-1283 (November 2013)

Video-Driven Creation of Virtual Avatars by Component-based Transferring of Facial Expressions*

Department of Computer Science and Information Engineering
National Dong Hwa University
Hualien, 974 Taiwan

This paper proposes an efficient and economical video-driven technique that enables the instant creation of a wide variety of virtual avatars and the automatic synthesis of vivid facial animations. The technique addresses the expression transfer problem: transferring a given facial expression of a source human character to the corresponding expression of a synthesized avatar. To tackle this problem, we propose a component-based approach that is more appealing than existing approaches, which treat the whole face as a single unit for expression transfer. Our approach achieves much greater diversity in synthesizing virtual avatars and facial expressions by composing the synthesized target face from the facial components of different avatars. The proposed method transfers the synthesis parameters acquired from the source human face to those of the target avatar face in a way that complies with the person-specific characteristics of the target avatar. Additionally, the removal of color inconsistencies among facial components taken from different avatars is also handled. Experimental results demonstrate that the proposed method achieves interesting and colorful transfers of facial expressions and synthesizes a large diversity of virtual avatars instantly.

Keywords: facial expression synthesis, active appearance model, color correction, facial feature tracking, virtual avatar
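Since the abstract mentions removing color inconsistencies among facial components taken from different avatars, a minimal sketch of one common way to do this is per-channel statistics matching, i.e., shifting a component patch's mean and standard deviation toward those of the target base face. This is an illustrative heuristic only; the function name, the synthetic patches, and the correction method itself are assumptions, not the paper's actual algorithm.

```python
import numpy as np

def color_correct(component, reference):
    """Match a component patch's per-channel mean/std to a reference
    region of the base face (a simple color-transfer heuristic;
    the paper's own correction method may differ)."""
    comp = component.astype(np.float64)
    ref = reference.astype(np.float64)
    c_mean, c_std = comp.mean(axis=(0, 1)), comp.std(axis=(0, 1)) + 1e-8
    r_mean, r_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    # Normalize the component's color distribution, then rescale it
    # to the reference distribution.
    out = (comp - c_mean) / c_std * r_std + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage: an eye patch taken from one avatar, corrected
# toward the skin tone of another avatar's base face.
eye_patch = np.full((8, 8, 3), (180, 140, 120), dtype=np.uint8)
base_skin = np.full((8, 8, 3), (200, 170, 150), dtype=np.uint8)
corrected = color_correct(eye_patch, base_skin)
```

For uniform patches like the toy example, the corrected component takes on the reference color exactly; for real image patches, the texture detail is preserved while the overall tone shifts toward the base face.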

Full Text: Retrieve PDF document (201311_12.pdf)

Received May 16, 2011; revised November 11, 2011 & February 22, 2012; accepted March 3, 2012.
Communicated by Tyng-Luh Liu.
* This work was supported by the National Science Council of Taiwan under Grant Nos. 97-2221-E-259-028-MY3 and 100-2221-E-259-032-.