Automatic facial expression recognition
Automatic recognition of facial features to estimate a person’s age, gender, and emotional expression is still an actively developing research area. Because human facial expressions vary so widely, reliably predicting a person’s age or emotions remains a challenge even for advanced neural models. Gender recognition fares considerably better: modern solutions consistently achieve average accuracy above 90%.
The BitRefine Heads video analysis platform offers a neural module that takes a face as input and returns the recognized parameters, e.g. “male, 35 years old, neutral expression”. The processing pipeline typically starts with an RTSP stream from an IP camera. The video then goes to an efficient face detector capable of locating and extracting dozens of faces within a single Full-HD frame. Each detected face is assigned its own ID by the tracker and passed to the age, gender, and emotion recognition neural module. The resulting estimates of a person’s age, gender, and facial expression are saved to the database and become available for search and reporting.
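The detector → tracker → attribute-estimation → database flow described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, not BitRefine’s actual API: `detect_faces`, `Tracker`, and `estimate_attributes` are stubs representing the detector, the ID-assigning tracker, and the neural module.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FaceRecord:
    # One row saved to the database per detected face per frame.
    track_id: int
    gender: str
    age: int
    emotion: str

def detect_faces(frame):
    # Stub: a real detector would return face crops found in the frame.
    return frame["faces"]

class Tracker:
    """Stub tracker: reuses an ID when the same face reappears,
    otherwise issues a new one."""
    def __init__(self):
        self._next_id = 0
        self._ids = {}
    def assign_id(self, face):
        key = face["key"]  # a real tracker would match by position/appearance
        if key not in self._ids:
            self._ids[key] = self._next_id
            self._next_id += 1
        return self._ids[key]

def estimate_attributes(face):
    # Stub for the age/gender/emotion neural module.
    return face["gender"], face["age"], face["emotion"]

def process_frame(frame, tracker: Tracker, db: List[FaceRecord]):
    # One pipeline step: detect, track, estimate, store.
    for face in detect_faces(frame):
        tid = tracker.assign_id(face)
        gender, age, emotion = estimate_attributes(face)
        db.append(FaceRecord(tid, gender, age, emotion))
```

Feeding the same two faces through two consecutive frames shows the tracker keeping stable IDs, so later stages can group all observations of one person.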
Because BitRefine is flexible, modular facial expression recognition software, the user can also add further processing modules to the pipeline: for example, cross-line counters that register people’s movement directions in addition to their facial features.
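A cross-line counter typically boils down to checking, for each tracked person, whether the segment between two consecutive positions crosses a counting line, and in which direction. The sketch below is one common geometric approach (sign of the cross product), not BitRefine’s documented implementation; the direction labels “in”/“out” are arbitrary.

```python
def side(line, point):
    # Sign of the cross product: which side of the line the point lies on.
    (x1, y1), (x2, y2) = line
    px, py = point
    return (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)

def crossing_direction(line, prev_pos, cur_pos):
    """Return 'in', 'out', or None depending on whether the movement
    from prev_pos to cur_pos crossed the counting line, and which way."""
    a, b = side(line, prev_pos), side(line, cur_pos)
    if a < 0 <= b:
        return "in"
    if a >= 0 > b:
        return "out"
    return None  # stayed on the same side: no crossing to count
```

Running this per tracked ID on every frame update yields separate “in” and “out” counts that can be stored alongside the facial attributes.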
To increase the accuracy of the age, gender, and emotion neural module, BitRefine’s processing pipeline automatically captures several images of a person’s face from different angles and makes a final estimate based on the combined results. This means that the longer a person stays in the frame, the more accurate the results recorded in the database will be. The human brain works in a similar way, correcting or verifying its initial impression by looking at a person’s face for some extra time.
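One plausible way to fuse several per-frame estimates for the same tracked face is to average the numeric attribute (age) and majority-vote the categorical ones (gender, emotion). This is an assumed fusion rule for illustration; the platform’s actual combination method is not documented here.

```python
from collections import Counter
from statistics import mean

def combine_estimates(samples):
    """Fuse per-frame estimates for one tracked face into a final record:
    mean for age, majority vote for gender and emotion (assumed rule)."""
    return {
        "age": round(mean(s["age"] for s in samples)),
        "gender": Counter(s["gender"] for s in samples).most_common(1)[0][0],
        "emotion": Counter(s["emotion"] for s in samples).most_common(1)[0][0],
    }
```

With three frames of the same person, one noisy emotion reading gets outvoted and the age estimates average out.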
Once the recognition pipeline is set up and the system begins collecting information about detected people, the user can turn to reports. By default, the system shows charts with the numbers of all detected objects. To see gender recognition statistics, the user just needs to specify “face.gender” as the property of interest; the reporting tool then shows two charts on the same timeline with the numbers of men and women. A pie chart also gives a visual sense of the men/women share of the total number of detected people. For statistics on people’s age, the user chooses “face.age” as the property of interest, and the system returns charts reflecting the age distribution. Access to the emotion recognition results works in the same way.
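Under the hood, a “property of interest” report like this amounts to grouping the stored records by one attribute and counting per value, which is enough data for both the timeline charts and the pie chart. A minimal sketch, assuming records are stored as plain dicts and the property name follows the “face.gender” pattern from the text:

```python
from collections import Counter

def report(records, prop):
    """Group stored detections by a property of interest
    (e.g. 'face.gender') and return counts per value."""
    key = prop.split(".", 1)[1]  # 'face.gender' -> 'gender'
    return dict(Counter(r[key] for r in records))
```

The counts per value directly give the pie-chart shares once divided by the total.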
The user is free to apply any filter needed: for example, showing statistics only for males over 40 years old. Or, if there were counting lines in the processing pipeline, the user can get statistics only for the people who crossed a particular line.
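Such combined filters (an equality condition plus a range condition) can be expressed as a simple predicate over the stored records. This generic helper is a hypothetical illustration of the idea, not BitRefine’s filter syntax:

```python
def filter_records(records, **conditions):
    """Keep records matching all conditions. A callable condition acts as
    a predicate (e.g. age=lambda a: a > 40); a plain value means equality."""
    def matches(r):
        return all(c(r[k]) if callable(c) else r[k] == c
                   for k, c in conditions.items())
    return [r for r in records if matches(r)]
```

For the example from the text, “males over 40” becomes `filter_records(records, gender="male", age=lambda a: a > 40)`.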