Evaluating Face Models Animated by MPEG-4 FAPs

OZHCI 2001 Talking Head workshop

J. Ahlberg, I. S. Pandzic, L. You
Image Coding Group, Linköping University


Outline

Purpose
Creating Test Data
Performing the Test
Measuring the Results
Reproducing the Test
Conclusion


Purpose

To investigate how well animated face models can express emotions when controlled by low-level MPEG-4 FAPs that reproduce motion captured from real faces acting out the emotions. To propose a standard benchmark test for MPEG-4 animated face models.


Creating Test Data

Feature points were tracked using a system with four IR-sensitive cameras and IR markers. Different emotions were acted out, both on their own and while reading a sentence. Video was recorded simultaneously.


The Tracking System


The Test Sequences

21 sequences were recorded, with 4 different people acting out different emotions. Each sequence was recorded as real video and also synthesized from the FAPs using two different face models, giving 63 sequences in total.


The Test Sequences

Real video, Jorgen model (Candide-3), Oscar model


Performing the Test

150 persons each watched 2/3 of the sequences, so each sequence was watched by 100 persons. For each sequence, the subjects marked which emotion they thought it showed. Each sequence was shown a few consecutive times. The following slides show the layout of the test.


[Four slides with screenshots of the test layout]

Measuring the Results

Compare the synthetic, real, ideal, and random cases.

Absolute Expressive Performance (AEP): compare the dispersion matrices and calculate the L1-norm of the differences; normalize so that the random case scores 0 and the ideal case scores 100.

Relative Expressive Performance (REP): the AEP expressed as a percentage of the AEP for the real case.
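The slide leaves the exact computation implicit. Below is a minimal Python sketch of one plausible reading: the dispersion matrix is a K x K confusion matrix (row = acted emotion, column = emotion perceived by the subjects, entries = response proportions), the ideal case is the identity matrix, and the random case is a uniform matrix. The function names and the precise normalization are illustrative assumptions, not the authors' published definition.

```python
import numpy as np

def aep(dispersion):
    """Absolute Expressive Performance (sketch).

    `dispersion` is a K x K matrix: row i holds the distribution of
    emotions perceived by the subjects when emotion i was acted.
    Assumed normalization: the L1 distance from the ideal (identity)
    matrix, rescaled so that random guessing -> 0 and ideal -> 100.
    """
    D = np.asarray(dispersion, dtype=float)
    K = D.shape[0]
    ideal = np.eye(K)                        # every emotion recognized correctly
    random_guess = np.full((K, K), 1.0 / K)  # subjects guessing uniformly

    dist_case = np.abs(D - ideal).sum()              # L1 norm of the difference
    dist_random = np.abs(random_guess - ideal).sum()
    return 100.0 * (1.0 - dist_case / dist_random)   # random == 0, ideal == 100

def rep(aep_model, aep_real):
    """Relative Expressive Performance: the model's AEP as a
    percentage of the AEP obtained with the real video."""
    return 100.0 * aep_model / aep_real
```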


Test Results

Absolute Expressive Performance (AEP):
The real case: 58.1
The Oscar model / FAE: 9.1
The Jorgen model (Candide-3) / MpegWeb: 9.4

Relative Expressive Performance (REP):
The Oscar model / FAE: 15.6
The Jorgen model (Candide-3) / MpegWeb: 16.2

The difference between the two synthetic cases is statistically insignificant.
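As a rough consistency check, and assuming REP is simply the AEP expressed as a percentage of the real case's AEP (as in the rep() sketch on the previous slide), the reported figures line up:

```python
print(rep(9.1, 58.1))   # ~15.7 (reported: 15.6; difference is likely AEP rounding)
print(rep(9.4, 58.1))   # ~16.2 (reported: 16.2)
```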


Reproducing the Test

The test is intended to be reproducible by anyone who wants to test or compare their face model(s). The video, FAP, and SMIL files will be available in a package that can be downloaded for free from the Image Coding Group website.


Conclusion

The expressive performance of the face models was much worse than that of the real video. There was no significant difference between the two face models. Main result: a reproducible test, proposed as a standard benchmark for the expressive performance of face models.
