Learning Silhouette Features for Control of Human Motion
Date
2005
Journal Title
ACM Transactions on Graphics
Publisher
Association for Computing Machinery
Citation
Ren, Liu, Gregory Shakhnarovich, Jessica K. Hodgins, Hanspeter Pfister, and Paul Viola. 2005. Learning silhouette features for control of human motion. ACM Transactions on Graphics 24(4): 1303-1331.
Abstract
We present a vision-based performance interface for controlling animated human characters. The system interactively combines information about the user's motion contained in silhouettes from three viewpoints with domain knowledge contained in a motion capture database to produce an animation of high quality. Such an interactive system might be useful for authoring, for teleconferencing, or as a control interface for a character in a game. In our implementation, the user performs in front of three video cameras; the resulting silhouettes are used to estimate his orientation and body configuration based on a set of discriminative local features. Those features are selected by a machine-learning algorithm during a preprocessing step. Sequences of motions that approximate the user's actions are extracted from the motion database and scaled in time to match the speed of the user's motion. We use swing dancing, a complex human motion, to demonstrate the effectiveness of our approach. We compare our results to those obtained with a set of global features, Hu moments, and ground truth measurements from a motion capture system.
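The global-feature baseline mentioned in the abstract, Hu moments computed over binary silhouettes, can be illustrated with a short sketch. The following Python snippet is a minimal illustration under stated assumptions, not the authors' implementation: the names silhouette_hu_features and PoseDatabase are hypothetical, and the brute-force nearest-neighbor lookup only stands in for the paper's learned local features and motion-database retrieval.

```python
# Illustrative sketch only: Hu-moment features for a binary silhouette and a toy
# nearest-neighbor lookup into a stand-in motion capture database.
import numpy as np
import cv2


def silhouette_hu_features(silhouette: np.ndarray) -> np.ndarray:
    """Return the 7 Hu moment invariants of a binary silhouette image."""
    moments = cv2.moments(silhouette.astype(np.uint8), binaryImage=True)
    hu = cv2.HuMoments(moments).flatten()
    # Log-scale the invariants, which span many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)


class PoseDatabase:
    """Toy stand-in for a motion capture database indexed by silhouette features."""

    def __init__(self, features: np.ndarray, poses: np.ndarray):
        self.features = features  # (N, 7) Hu features, one row per database frame
        self.poses = poses        # (N, D) joint-angle vectors for those frames

    def nearest_pose(self, query_features: np.ndarray) -> np.ndarray:
        # Brute-force nearest neighbor in feature space.
        distances = np.linalg.norm(self.features - query_features, axis=1)
        return self.poses[np.argmin(distances)]


if __name__ == "__main__":
    # Synthetic example: random database "frames" and one fake query silhouette.
    rng = np.random.default_rng(0)
    db = PoseDatabase(features=rng.normal(size=(100, 7)),
                      poses=rng.normal(size=(100, 60)))
    silhouette = np.zeros((240, 320), dtype=np.uint8)
    cv2.circle(silhouette, (160, 120), 60, 255, thickness=-1)  # filled disc as a fake silhouette
    query = silhouette_hu_features(silhouette)
    print("Retrieved pose vector shape:", db.nearest_pose(query).shape)
```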
Keywords
animation interface, computer vision, machine-learning, motion capture, motion control, performance animation
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the repository's Terms of Service.