Tracking and Recognizing Facial Expressions


Overview

This technology uses local parameterized models of image motion to recover and recognize the non-rigid and articulated motion of human faces. Parametric flow models (for example, affine) are popular for estimating motion in rigid scenes. Within local regions in space and time, such models not only accurately capture non-rigid facial motions but also describe the motion concisely with a small number of parameters. These parameters are intuitively related to the motion of facial features during facial expressions, and expressions such as anger, happiness, surprise, fear, disgust, and sadness can be recognized from the local parametric motions even in the presence of significant head motion. The motion tracking and expression recognition approach performs with high accuracy in extensive laboratory experiments involving 40 subjects, as well as on television and movie sequences.
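
As a rough illustration of the kind of local parametric motion estimate involved, the sketch below fits a six-parameter affine flow model to a local image region by least squares on the linearized brightness-constancy constraint. The function names, the plain affine parameterization, and the NumPy-based formulation are illustrative assumptions, not the technology's actual implementation.

```python
import numpy as np

def estimate_affine_flow(Ix, Iy, It, mask=None):
    """Least-squares fit of six affine flow parameters (a0..a5) in a
    local region, assuming the linearized brightness-constancy constraint
        Ix*(a0 + a1*x + a2*y) + Iy*(a3 + a4*x + a5*y) + It = 0.
    Ix, Iy, It are spatial/temporal derivative images of equal shape;
    an optional boolean mask restricts the fit to a facial region.
    """
    h, w = Ix.shape
    ys, xs = np.mgrid[0:h, 0:w]
    if mask is None:
        mask = np.ones_like(Ix, dtype=bool)

    ix, iy, it = Ix[mask], Iy[mask], It[mask]
    x, y = xs[mask].astype(float), ys[mask].astype(float)

    # Each pixel contributes one row of the linear system A @ a = -It.
    A = np.column_stack([ix, ix * x, ix * y, iy, iy * x, iy * y])
    b = -it
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params  # [a0, a1, a2, a3, a4, a5]

def affine_flow_field(params, shape):
    """Dense (u, v) flow field implied by the affine parameters."""
    a0, a1, a2, a3, a4, a5 = params
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    u = a0 + a1 * xs + a2 * ys
    v = a3 + a4 * xs + a5 * ys
    return u, v
```

In this spirit, tracking and recognition would amount to fitting such region-level parameters frame to frame and interpreting their temporal patterns as expression cues; the details above are only a sketch of that idea.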

For more information, contact the Office of Technology Commercialization, 301-405-3947 or [email protected].

Contact Info

UM Ventures
0134 Lee Building
7809 Regents Drive
College Park, MD 20742
Email: [email protected]
Phone: (301) 405-3947 | Fax: (301) 314-9502