Face Transfer with Multilinear Models


dc.contributor.author Vlasic, Daniel
dc.contributor.author Brand, Matthew
dc.contributor.author Pfister, Hanspeter
dc.contributor.author Popovic, Jovan
dc.date.accessioned 2010-06-16T13:38:49Z
dc.date.issued 2006
dc.identifier.citation Vlasic, Daniel, Matthew Brand, Hanspeter Pfister, and Jovan Popovic. 2006. Face transfer with multilinear models. In Proceedings of the International Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH 2006 Courses: July 30-August 3, 2006, Boston, Massachusetts, ed. J. Dorsey, 426-433. New York, N.Y.: ACM Press. en_US
dc.identifier.isbn 1-59593-364-6 en_US
dc.identifier.uri http://nrs.harvard.edu/urn-3:HUL.InstRepos:4238955
dc.description.abstract Face Transfer is a method for mapping video-recorded performances of one individual to facial animations of another. It extracts visemes (speech-related mouth articulations), expressions, and three-dimensional (3D) pose from monocular video or film footage. These parameters are then used to generate and drive a detailed 3D textured face mesh for a target identity, which can be seamlessly rendered back into target footage. The underlying face model automatically adjusts for how the target performs facial expressions and visemes. The performance data can be easily edited to change the visemes, expressions, pose, or even the identity of the target---the attributes are separably controllable. This supports a wide variety of video rewrite and puppetry applications. Face Transfer is based on a multilinear model of 3D face meshes that separably parameterizes the space of geometric variations due to different attributes (e.g., identity, expression, and viseme). Separability means that each of these attributes can be independently varied. A multilinear model can be estimated from a Cartesian product of examples (identities × expressions × visemes) with techniques from statistical analysis, but only after careful preprocessing of the geometric data set to secure one-to-one correspondence, to minimize cross-coupling artifacts, and to fill in any missing examples. Face Transfer offers new solutions to these problems and links the estimated model with a face-tracking algorithm to extract pose, expression, and viseme parameters. en_US
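The separability described in the abstract can be illustrated with a minimal numerical sketch: a Tucker-style multilinear model synthesizes one face mesh by contracting a core tensor with one weight vector per attribute mode, so varying a single weight vector changes only that attribute. All dimensions, names, and the random core below are hypothetical placeholders, not the paper's actual data or estimation procedure.

```python
import numpy as np

# Hypothetical dimensions: 10 identities, 5 expressions, 4 visemes,
# and 300 vertex coordinates per mesh (100 vertices * xyz).
n_id, n_expr, n_vis, n_vert = 10, 5, 4, 300

# A multilinear "core" tensor, as would come from N-mode SVD of the
# mesh data tensor; random here purely for illustration.
rng = np.random.default_rng(0)
core = rng.random((n_id, n_expr, n_vis, n_vert))

def synthesize(core, w_id, w_expr, w_vis):
    """Contract the core with one weight vector per attribute mode,
    yielding a single mesh as a flat vertex-coordinate vector."""
    mesh = np.tensordot(w_id, core, axes=(0, 0))    # collapse identity mode
    mesh = np.tensordot(w_expr, mesh, axes=(0, 0))  # collapse expression mode
    mesh = np.tensordot(w_vis, mesh, axes=(0, 0))   # collapse viseme mode
    return mesh                                     # shape: (n_vert,)

# Uniform weights over each mode; changing only w_expr, say, moves the
# mesh along the expression axis while identity and viseme stay fixed.
w_id = np.ones(n_id) / n_id
w_expr = np.ones(n_expr) / n_expr
w_vis = np.ones(n_vis) / n_vis
mesh = synthesize(core, w_id, w_expr, w_vis)
print(mesh.shape)  # (300,)
```

This is only the synthesis direction; the paper's contribution also covers estimating the model from incomplete data and tracking the weight vectors from monocular video.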
dc.description.sponsorship Engineering and Applied Sciences en_US
dc.language.iso en_US en_US
dc.publisher Association for Computing Machinery en_US
dc.relation.isversionof doi:10.1145/1185657.1185864 en_US
dc.relation.hasversion http://gvi.seas.harvard.edu/sites/all/files/SIG2005-FACES_0.pdf en_US
dash.license LAA
dc.subject facial animation en_US
dc.subject computer vision—tracking en_US
dc.title Face Transfer with Multilinear Models en_US
dc.type Monograph or Book en_US
dc.description.version Accepted Manuscript en_US
dc.relation.journal ACM SIGGRAPH 2006 Courses en_US
dash.depositing.author Pfister, Hanspeter
dc.date.available 2010-06-16T13:38:49Z

Files in this item

Files Size Format View
Vlasic_Face.pdf 13.75 MB PDF View/Open

This item appears in the following Collection(s)

  • FAS Scholarly Articles [7078]
    Peer reviewed scholarly articles from the Faculty of Arts and Sciences of Harvard University
