
LA CASCIA, M., VALENTI, L., SCLAROFF, S. (2004). Fully Automatic, Real-Time Detection of Facial Gestures from Generic Video. In IEEE 6th Workshop on Multimedia Signal Processing [10.1109/MMSP.2004.1436520].

Fully Automatic, Real-Time Detection of Facial Gestures from Generic Video

LA CASCIA, Marco;
2004-01-01

Abstract

A technique for detection of facial gestures from low-resolution video sequences is presented. The technique builds upon the automatic 3D head tracker formulation of [11]. The tracker is based on registration of a texture-mapped cylindrical model. Facial gesture analysis is performed in the texture map by assuming that the residual registration error can be modeled as a linear combination of facial motion templates. Two formulations are proposed and tested. In one formulation, head and facial motion are estimated in a single, combined linear system. In the other formulation, head motion and then facial motion are estimated in a two-step process. The two-step approach yields significantly better accuracy in facial gesture analysis. The system is demonstrated in detecting two types of facial gestures: “mouth opening” and “eyebrows raising.” On a dataset with substantial head motion, the two-step algorithm achieved a recognition accuracy of 70% for the “mouth opening” gesture and 66% for the “eyebrows raising” gesture. The algorithm can reliably track and classify facial gestures without any user intervention and runs in real time.
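The linear-combination model described in the abstract can be sketched as follows. This is a toy illustration, not the paper's implementation: all names, dimensions, and data below are invented. After head-pose registration, the residual texture-map error r is treated as a linear combination of two facial motion templates b1, b2, and the gesture coefficients are recovered by least squares via the 2×2 normal equations (BᵀB)c = Bᵀr.

```python
import random

# Toy sketch (all data invented, not from the paper): fit gesture
# coefficients to a registration residual by least squares.
random.seed(0)
n = 500  # size of the flattened texture-map residual (toy value)

b1 = [random.gauss(0, 1) for _ in range(n)]  # "mouth opening" template
b2 = [random.gauss(0, 1) for _ in range(n)]  # "eyebrows raising" template

true_c = (0.8, 0.1)  # ground-truth gesture activations used to synthesize r
r = [true_c[0] * x + true_c[1] * y + random.gauss(0, 0.01)
     for x, y in zip(b1, b2)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Normal equations: [[a11, a12], [a12, a22]] @ [c1, c2] = [g1, g2]
a11, a12, a22 = dot(b1, b1), dot(b1, b2), dot(b2, b2)
g1, g2 = dot(b1, r), dot(b2, r)
det = a11 * a22 - a12 * a12
c1 = (g1 * a22 - g2 * a12) / det  # Cramer's rule for the 2x2 system
c2 = (a11 * g2 - a12 * g1) / det

# A simple threshold on each coefficient flags the corresponding gesture
print(round(c1, 2), round(c2, 2), c1 > 0.5, c2 > 0.5)
```

With near-orthogonal templates the recovered coefficients closely match the ground truth, and thresholding each coefficient yields a per-frame gesture decision; the paper's two-step formulation estimates head motion first so that this residual fit is not contaminated by pose error.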
IEEE International Workshop on Multimedia Signal Processing
Siena
29 September – 1 October 2004
6
Proceedings (conference proceedings)
LA CASCIA, M; VALENTI, L; SCLAROFF, S
Files in this record:
File: 2004 mmsp.pdf
Access: open access
Size: 1.04 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/4784
Citations
  • PMC: n/a
  • Scopus: 2
  • Web of Science: 1