2016 Looking at People CVPR Challenge
ChaLearn LAP and FotW Challenge and Workshop @ CVPR2016
Face Analysis Workshop and Challenge
Caesar's Palace, Las Vegas, Nevada
July 1, 2016
For CVPR 2016, ChaLearn organizes three parallel quantitative challenge tracks on RGB face analysis.
Track 1: Apparent Age Estimation: An extended version of the dataset from the previous ICCV 2015 challenge. It contains 8,000 images, each displaying a single individual and labeled with the apparent age. Each image has been labeled by multiple individuals through a collaborative Facebook application and Amazon Mechanical Turk, and the variance of the votes is used as a measure of the error of the predictions. This is the first state-of-the-art database for apparent age recognition, as opposed to real age recognition.
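For reference, the per-image fitness in this family of challenges is typically computed from the mean and standard deviation of the human votes. The following is a minimal sketch, assuming the commonly cited form epsilon = 1 - exp(-(x - mu)^2 / (2 * sigma^2)); the function name and the exact formula are assumptions, not taken from the official evaluation kit.

# Sketch of the apparent-age error, assuming epsilon = 1 - exp(-(x - mu)^2 / (2 * sigma^2)),
# where mu and sigma are the mean and standard deviation of the human votes for an
# image and x is the predicted age. Assumed form, not the official evaluation code.
import numpy as np

def apparent_age_error(predicted, vote_mean, vote_std, eps=1e-8):
    """Per-image error in [0, 1); 0 means the prediction matches the mean vote."""
    predicted = np.asarray(predicted, dtype=float)
    vote_mean = np.asarray(vote_mean, dtype=float)
    vote_std = np.asarray(vote_std, dtype=float)
    return 1.0 - np.exp(-((predicted - vote_mean) ** 2) / (2.0 * (vote_std + eps) ** 2))

# Example: predicting 30 years for an image voted 32 +/- 4 years.
print(apparent_age_error(30.0, 32.0, 4.0))  # ~0.118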
NEWS
March 23: Final results for Track 1 (Apparent Age Estimation) are now available. Thank you all for participating; we hope you enjoyed the challenge!
January 25: Tracks 1 (Apparent Age Estimation), 2 (Accessories Classification) and 3 (Smile and Gender Classification) started. Enjoy the Challenge!
Figure 1: Samples of the Apparent Age Estimation track
Track 2: Accessories Classification: The aim of this track is to detect and classify the complements and accessories worn by the subjects. It uses a fraction of the Faces of the World (FotW) dataset, a challenging dataset consisting of 8,000 images, each displaying a single individual and labeled with the accessories they are wearing. A hedged sketch of one way to encode such annotations follows below.
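Since an image can show several accessories at once, the task is naturally treated as multi-label classification. The sketch below shows one possible binary encoding of the annotations and a mean per-label accuracy; the label names and the scoring function are illustrative assumptions, not the official FotW label set or metric.

# Illustrative multi-label encoding and per-label accuracy for accessory
# classification. Label names and the averaged per-label accuracy are
# placeholders, not the official FotW labels or evaluation measure.
import numpy as np

LABELS = ["glasses", "hat", "headband"]  # hypothetical accessory classes

def encode(accessories):
    """Turn a list of accessory names into a binary indicator vector."""
    return np.array([1 if name in accessories else 0 for name in LABELS])

def mean_per_label_accuracy(y_true, y_pred):
    """Average, over labels, of the fraction of images predicted correctly."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return (y_true == y_pred).mean(axis=0).mean()

# Example with two annotated images and two predictions.
truth = [encode(["glasses", "hat"]), encode([])]
preds = [encode(["glasses"]), encode([])]
print(mean_per_label_accuracy(truth, preds))  # ~0.833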
Track 3: Smile and Gender Classification: In this track, participants have to classify images of the FotW dataset according to gender (male, female, or other) and basic expression (smiling, neutral, or other expression). Along with accurate face detection and alignment, this track requires robust feature selection and extraction to identify the subject's gender and expression, which can be difficult to classify even for the human eye in uncontrolled environments such as those present in the FotW dataset.
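A typical baseline for this kind of in-the-wild classification first localizes and crops the face, then feeds the normalized crop to a learned classifier. The sketch below covers only the detection-and-cropping stage, using OpenCV's stock Haar cascade; the crop size, input file, and downstream classifier are assumptions, not part of the challenge protocol.

# Minimal face-localization step for a smile/gender baseline, assuming an
# OpenCV installation that ships the stock Haar cascades (cv2.data.haarcascades).
# The crop size and the downstream classifier are placeholders.
import cv2

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_largest_face(image_path, size=128):
    """Return the largest detected face as a size x size grayscale crop, or None."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return None
    faces = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    return cv2.resize(gray[y:y + h, x:x + w], (size, size))

# A classifier trained on the FotW training split would then predict the
# smile and gender labels from each crop.
crop = crop_largest_face("example.jpg")  # hypothetical input file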
The top three ranked participants in each track will receive a prize, subject to the available budget, and will be invited to follow the workshop submission guide for inclusion of a description of their system in the CVPR 2016 conference proceedings.
Thanks to our CVPR 2016 sponsors: Microsoft Research, University of Barcelona, Amazon, INAOE, Google, NVIDIA Corporation, and Facebook. This research has been partially supported by projects TIN2012-39051 and TIN2013-43478-P.