Face recognition is a technology that used to astound us decades ago. Today it is widespread and built into everyday security systems. If you are an Apple fan, pick up your iPhone X and unlock it with Face ID. This system works by projecting more than 30,000 infrared dots onto your face to create a depth map and capture an infrared image of it.
Windows 10 uses biometric security as well, through Windows Hello. But what about recognizing and understanding the dynamics of human faces? Well, that’s what we have FACS for.
FACS might be just as exciting as it sounds! It stands for Facial Action Coding System, in case you didn’t know. Some men might wonder whether it’s possible to implement this system somewhere convenient, like their brain, as they need assistance in reading their loved ones’ facial expressions, for they often miss the signs.
Are you engaging your orbicularis oris, or your frontalis, pars lateralis? If you don’t know what this means, you should get interested in the Facial Action Coding System. Body language often tells us more than words do, and of the whole body, it is the face that is the most talkative. Every movement on our face says something about what and how we feel. If someone tells you ‘I’m ok’ while pulling their lip corners down just a little, you know something is wrong no matter what they say.
FACS is a system for coding various facial expressions and movements. It originated in Sweden in 1969, thanks to anatomist Carl-Herman Hjortsjö, whose original system encompassed 23 facial movements. It was later adopted and developed further by Ekman and Friesen, who, together with Hager, published a significant update to FACS in 2002.
This system is essentially an index of facial expressions, based on an anatomical analysis of facial muscle movements. Each movement, sparked by an emotion, leads to a different expression and engages certain muscles. One movement may use just a single muscle, another two or more. But it’s not just about the final expression; it’s about how the movement behind it unfolds. This is why it is important for FACS users to learn and understand the muscular mechanics and dynamics of a movement.
A deeper understanding of the movements also allows us to identify more subtle expressions. Besides the expressions of basic emotions, such as happiness, sadness, anger or surprise, there are a number of ambiguous and subtle expressions that FACS can identify. The system takes facial features into account, including the eyebrows, lips, nostrils and eyes, but disregards sweating, pimples and changes in muscle tone.
Thanks to this anatomical system, we can measure facial expressions by breaking them down into singular movements, relaxations or contractions of a muscle. Each individual movement of one or more muscles is identified as an Action Unit (AU), and each AU can appear on its own or in combination with others.
Coding, then, means identifying a facial expression and deconstructing it into Action Units. Thanks to AUs, we can gain a deeper understanding of a person’s expressions, going beyond basic emotions such as happiness, sadness or anger. While an AU relates to the movement of one muscle or a group of muscles, Action Descriptors relate to movements of several muscle groups.
FACS also records the intensity of a movement, marking it with letters from A (trace) to E (maximum). The side of the movement is taken into account as well: an action on the right side is marked R, one on the left L, and a unilateral action, one occurring on only one side, is marked U.
There are also codes for eye and head movements, visibility codes and more. AU0, Action Unit number 0, stands for a neutral face, while the outer brow raiser, for example, is AU2; if we raise only the right outer brow to maximum intensity, the code becomes R2E.
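To make the coding scheme concrete, here is a minimal sketch in Python that splits a code string such as R2E into its parts. The exact string format assumed here (optional side prefix, AU number, optional intensity letter, with an optional "AU" in front) is an illustrative simplification, not the official FACS notation specification.

```python
import re

# Human-readable meanings for the intensity letters and side prefixes
# described above. (Assumed mapping for illustration only.)
INTENSITY = {"A": "trace", "B": "slight", "C": "marked", "D": "severe", "E": "maximum"}
SIDE = {"L": "left", "R": "right", "U": "unilateral"}

def parse_au(code: str) -> dict:
    """Parse a simplified FACS code string, e.g. 'R2E', 'AU0' or '12C'."""
    m = re.fullmatch(r"(?:AU)?([LRU]?)(\d+)([A-E]?)", code.strip().upper())
    if not m:
        raise ValueError(f"not a valid AU code: {code!r}")
    side, number, intensity = m.groups()
    return {
        "au": int(number),
        "side": SIDE.get(side, "bilateral"),          # no prefix: both sides
        "intensity": INTENSITY.get(intensity, "unspecified"),
    }

print(parse_au("R2E"))
# {'au': 2, 'side': 'right', 'intensity': 'maximum'}
```

Real automated FACS tools do far more than parse strings, of course, but representing an observation as (AU, side, intensity) in this way mirrors how human coders write their scores down.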
FACS has found applications in many settings, both professional and private. Science uses it for research, for example when evaluating emotional impairments, and it is now widely used by animators in the film industry. It also functions as a computer-automated system that identifies faces in videos, extracts their features and creates a temporal profile of each movement. But going from manual to automatic coding is an ongoing process, and more technological refinement is needed to make coders’ lives easier.
It’s AU12 on her face when you say that you have secretly planned a trip for the two of you. If your beloved loves to travel, it must be AU12! Sometimes we have a hard time decoding or understanding other people’s feelings, so how far can technology go in demystifying humans?
Will technology become better at understanding something it can only superficially mimic? When it comes to safety, or to a better understanding of our own behavior, we can only hope it will go further.
Ed Bryant has been a writer for Idkmen.com for more than two years. He mostly writes about topics and products that add value to the life of the modern male, and his work is always informational and entertaining.