Virtual Reality

Interpolation Synthesis of Articulated Figure Motion

Douglas J. Wiley
Naval Research Laboratory
Most conventional media depend on engaging and appealing characters. Empty spaces and buildings would not fare well as television or movie programming, yet virtual reality usually offers up such spaces. The problem lies in the difficulty of creating computer-generated characters that display real-time, engaging interaction and realistic motion.

Articulated figure motion for real-time computer graphics offers one solution to this problem. A common approach stores a set of motions and lets you choose one particular motion at a time. This article describes a process that greatly expands the range of possible motions. Mixing motions selected from a database lets you create a new motion to exact specifications. The synthesized motion retains the original motions' subtle qualities, such as the realism of motion capture or the expressive, exaggerated qualities of artistic animation. Our method provides a new way to achieve inverse kinematics capability, for example, placing the hands or feet of an articulated figure in specific positions. It proves useful for both real-time graphics and prerendered animation production.

…similar to that desired to form an exactly specified motion. A small set of example motions then yields a continuous multidimensional space of possible motions. The example motions can come from keyframing, physical simulations, or motion capture. Because the interpolation process generally preserves the motion's subtle qualities, we can create reusable libraries of motion data from a single step of laborious keyframing, iterative estimation of initial conditions, or correction of errors and dropouts, respectively.

This article focuses on real-time interpolation synthesis of motion based on motion capture data. This approach works well for real-time character motion in interactive entertainment or for avatars in multiplayer networked games or virtual worlds. After transmission of a motion database, network traffic would consist merely of a motion specification indicating parameters of the interpolation.
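The claim that network traffic reduces to a compact motion specification can be made concrete. The record layout below (a motion identifier plus three interpolation goal parameters) is purely hypothetical, invented for illustration; the article does not define a wire format.

```python
import struct

# Hypothetical wire format, invented for illustration: a motion
# identifier plus three interpolation goal parameters. After the motion
# database is transmitted once, each update sends only this record.
SPEC = struct.Struct("<Hfff")  # uint16 motion id; float32 x, y, z goals

def encode_motion_spec(motion_id, x, y, z):
    """Pack a motion specification into a fixed-size payload."""
    return SPEC.pack(motion_id, x, y, z)

def decode_motion_spec(payload):
    """Unpack a payload back into (motion_id, x, y, z)."""
    return SPEC.unpack(payload)
```

Each update is then SPEC.size (14) bytes, rather than a full per-joint pose stream every frame.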
40 November/December 1997
We use spherical linear interpolation13 for each quaternion orientation component (p, q, and result r):

r = q · sin((1 − u)W)/sin(W) + p · sin(uW)/sin(W),
u ∈ [0, 1], W = arccos(q · p)   (2)

In general, we obtain interpolation coefficient u by assuming linearity of the inputs with respect to the outputs in each dimension. Given target t lying between grid points l and h,

u = (t − l)/(h − l)   (3)

We use the same interpolation coefficient for the corresponding quaternion interpolations.

We first perform four interpolations along the x axis. We use interpolation coefficient u00 (Equation 4, below) to interpolate each of the six position and orientation components of data poses d000 and d100, resulting in intermediate pose i00. The points shown in Figure 3 denote hand position in space corresponding to a particular data or intermediate interpolation result pose.

[Figure 3: Trilinear interpolation binary tree: four interpolations in the x direction, two in the y direction, and a final interpolation in the z direction (dxyz, iyz, iz).]

We make a final interpolation of the poses i0 and i1 based on the target z-axis position. This direct computation results in a pose that, given sufficiently dense data, closely approximates the target goal. We can enforce limb length if necessary and can accommodate higher accuracy or sparse data using iterative optimizations.

Three degrees of freedom suffice for the simple character representation used here. Six degrees of freedom would merely require sufficient data varying in both position and orientation, and a higher dimensional interpolation procedure.
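The slerp of Equation 2 and the coefficient of Equation 3 can be sketched in a few lines. This is a minimal illustration, not the article's implementation: the quaternion layout (4-tuples) and the endpoint convention (q0 returned at u = 0, q1 at u = 1) are assumptions, and a componentwise fallback guards the sin(W) ≈ 0 case.

```python
import math

def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions,
    weighting the endpoints by sin((1 - u)W)/sin(W) and sin(uW)/sin(W)
    with W = arccos(q0 . q1), as in Equation 2."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(q0, q1))))
    w = math.acos(dot)
    if math.sin(w) < 1e-6:
        # Nearly parallel quaternions: fall back to componentwise lerp.
        return tuple((1 - u) * a + u * b for a, b in zip(q0, q1))
    s0 = math.sin((1 - u) * w) / math.sin(w)
    s1 = math.sin(u * w) / math.sin(w)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def coeff(t, l, h):
    """Equation 3: interpolation coefficient for a target t lying
    between grid points l and h."""
    return (t - l) / (h - l)
```

Halfway between two quaternions 90 degrees apart, slerp weights both endpoints by sqrt(2)/2, and coeff(5, 0, 10) gives 0.5.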
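Figure 3's binary tree of interpolations can also be sketched directly: four pose interpolations along x, two along y, and a final one along z. This sketch makes assumptions beyond the source: a pose is modeled as a (position, quaternion) pair, and a single coefficient per axis stands in for the per-pair coefficients (u00 and so on) that the article derives; orientations use a standard slerp as in Equation 2.

```python
import math

def lerp(a, b, u):
    """Componentwise linear interpolation of position vectors."""
    return tuple((1 - u) * x + u * y for x, y in zip(a, b))

def slerp(q0, q1, u):
    """Spherical linear interpolation of unit quaternions (Equation 2)."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(q0, q1))))
    w = math.acos(dot)
    if math.sin(w) < 1e-6:
        return lerp(q0, q1, u)  # nearly parallel: componentwise fallback
    return tuple((math.sin((1 - u) * w) * a + math.sin(u * w) * b) / math.sin(w)
                 for a, b in zip(q0, q1))

def interp_pose(pa, pb, u):
    """Interpolate two (position, orientation) poses with one coefficient."""
    (pos_a, rot_a), (pos_b, rot_b) = pa, pb
    return (lerp(pos_a, pos_b, u), slerp(rot_a, rot_b, u))

def trilinear_pose(d, ux, uy, uz):
    """Trilinear interpolation binary tree over corner poses d[x][y][z]:
    four interpolations in x (the i_yz), two in y (the i_z), and a
    final one in z, as in Figure 3."""
    i_yz = {(y, z): interp_pose(d[0][y][z], d[1][y][z], ux)
            for y in (0, 1) for z in (0, 1)}
    i_z = {z: interp_pose(i_yz[(0, z)], i_yz[(1, z)], uy) for z in (0, 1)}
    return interp_pose(i_z[0], i_z[1], uz)
```

With corner poses at the unit cube's vertices and identity orientations, coefficients (0.5, 0.5, 0.5) recover the cube's center.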
…example shows. Iterative optimization can improve accuracy as needed, but adequately dense data with direct computation should suffice for many interactive entertainment applications. The spatial ordering and resampling techniques prove useful even if motions are used exactly as stored because they provide a direct search and choice of sampling density for stored motion. The emergence of low-cost, powerful processors and 3D graphics hardware should make interpolation calculations more feasible in future consumer graphics and networked virtual reality, however.

This approach offers a unique combination of realism and controllability for real-time motion synthesis. While limited to a small number of parameters, the technique provides a far richer range of motions than the use of prestored motions. The combination of motion capture and interpolation synthesis has the potential to animate the characters needed in currently empty virtual reality environments and add a rich variety of motion to avatars in networked virtual worlds. ■

Acknowledgments
Support for this work was provided by the Naval Research Laboratory in Washington, D.C. under contract N00014-94-2-C002 and grant N00014-94-K-2009 to George Washington University. Special thanks to House of Moves, Digital Domain, and ElektraShock of Venice Beach, Calif. for discussions about motion capture in the entertainment industry.

References
1. J. Lasseter, "Principles of Traditional Animation Applied to 3D Computer Animation," Computer Graphics (Proc. Siggraph 87), Vol. 21, No. 4, July 1987, pp. 35-43.
2. A. Witkin and M. Kass, "Spacetime Constraints," Computer Graphics (Proc. Siggraph 88), Vol. 22, No. 4, July 1988, pp. 159-168.
3. K. Waters, "A Muscle Model for Animating Three-Dimensional Facial Expression," Computer Graphics (Proc. Siggraph 87), Vol. 21, No. 4, July 1987, pp. 17-24.
4. F. Parke, "A Parameterized Model for Facial Animation," IEEE Computer Graphics & Applications, Vol. 2, No. 9, Nov. 1982, pp. 61-68.
5. A. Bruderlin and T. Calvert, "Goal-Directed, Dynamic Animation of Human Walking," Computer Graphics (Proc. Siggraph 89), Vol. 23, No. 3, July 1989, pp. 233-242.
6. J. Granieri, J. Crabtree, and N. Badler, "Off-Line Production and Real-Time Playback of Human Figure Motion for 3D Virtual Environments," Proc. VRAIS 95, IEEE Computer Society Press, Los Alamitos, Calif., 1995.
7. A. Bruderlin and L. Williams, "Motion Signal Processing," Computer Graphics (Proc. Siggraph 95), Vol. 29, No. 4, Aug. 1995, pp. 97-104.
8. K. Perlin, "Real-Time Responsive Animation with Personality," IEEE Trans. on Visualization and Computer Graphics, Vol. 1, No. 1, Mar. 1995, pp. 5-15.
9. M. Unuma, K. Anjyo, and R. Takeuchi, "Fourier Principles for Emotion-Based Human Figure Animation," Computer Graphics (Proc. Siggraph 95), Vol. 29, No. 4, Aug. 1995, pp. 91-96.
10. A. Witkin and Z. Popovic, "Motion Warping," Computer Graphics (Proc. Siggraph 95), Vol. 29, No. 4, Aug. 1995, pp. 105-108.
11. H. Ko and J. Cremer, "VRLoco: Real-Time Human Locomotion from Positional Input Streams," Presence, Vol. 5, No. 4, 1996, pp. 367-380.
12. D. Tolani and N. Badler, "Real-Time Inverse Kinematics of the Human Arm," Presence, Vol. 5, No. 4, 1996, pp. 393-401.
13. A. Watt and M. Watt, Advanced Animation and Rendering Techniques, Addison-Wesley, Wokingham, England, 1993.
14. J.E. Chadwick, D.R. Haumann, and R.E. Parent, "Layered Construction for Deformable Animated Characters," Computer Graphics (Proc. Siggraph 89), Vol. 23, No. 3, July 1989, pp. 293-302.

Douglas Wiley just completed a DSc in computer science at George Washington University while working at the Naval Research Laboratory in Washington, D.C. His interests include motion capture data manipulation, real-time graphics and virtual reality, and procedural modeling. He previously conducted research in optical signal processing at the Army Research Laboratory in White Oak, Maryland, after completing a PhD in electrical engineering at the University of Southern California. He holds BS degrees in electrical engineering, mathematics, and computer science from the University of Maryland in College Park, Maryland. He is currently working as an independent consultant and contractor.

James Hahn is an associate professor in the Department of Electrical Engineering and Computer Science at the George Washington University. He is the founding director of the George Washington University Institute for Computer Graphics and the Laboratory for Advanced Computer Applications in Medicine. His research interests include motion control, virtual reality, image rendering, and sound. He received BS degrees from the University of South Carolina in physics, mathematics, and computer science and an MS in physics from the University of California, Los Angeles. He received an MS and a PhD in computer and information science from the Ohio State University in 1989.

Readers may contact Wiley at 9211 Bardon Rd., Bethesda, MD 20814, e-mail doug-wiley@juno.com or dwiley@seas.gwu.edu.