Monday, March 24, 2014

Scan my face and I'll tell you what you're like!

by Alberto Carrara, LC

A joint scientific study by the University of California, San Diego and the University of Toronto, published in Current Biology on March 20, 2014, shows that a computational system can distinguish real from faked facial expressions (facial micro-expressions) of pain better than ordinary mortals can.

The paper, available on the Cell website, is titled Automatic Decoding of Deceptive Pain Expressions and focuses on the neural systems underlying the so-called facial "micro-expressions." As the authors point out, in human beings facial expression is crucial for communicating emotional states, especially those related to pain, since man is a relational as well as a rational animal.


Two systems control facial movements:

- the subcortical extrapyramidal motor system, which drives spontaneous facial expressions of felt emotions;
- the cortical pyramidal motor system, which controls voluntary facial expressions.

Here are the highlights and the abstract of the paper:

Highlights
- Untrained human observers cannot differentiate faked from genuine pain expressions.
- With training, human performance is above chance but remains poor.
- A computer vision system distinguishes faked from genuine pain better than humans.
- The system detected distinctive dynamic features of expression missed by humans.

Summary

In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain. Two motor pathways control facial movement: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.
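The abstract attributes the machine's advantage to its ability to separate the dynamics of genuine (extrapyramidally driven) movements from faked (pyramidally driven) ones. As a purely illustrative sketch, not the paper's actual method, the idea of classifying an expression from the temporal dynamics of a facial action unit (AU) intensity trace can be caricatured with a single hand-picked feature and threshold; the feature, threshold, and toy traces below are all assumptions for illustration:

```python
# Hypothetical sketch: telling genuine from faked pain expressions
# by the dynamics of a facial action unit (AU) intensity trace.
# The single feature used here (mean frame-to-frame change) is an
# illustrative stand-in for the learned dynamic features in the paper.

def dynamic_feature(trace):
    """Mean absolute frame-to-frame change in AU intensity."""
    deltas = [abs(b - a) for a, b in zip(trace, trace[1:])]
    return sum(deltas) / len(deltas)

def classify(trace, threshold=0.15):
    """Label a trace 'genuine' if its dynamics are irregular enough."""
    return "genuine" if dynamic_feature(trace) > threshold else "faked"

# Toy traces (assumed): genuine pain tends to show abrupt, ballistic
# onsets, while deliberate (pyramidal) control yields smoother ramps.
genuine = [0.0, 0.1, 0.6, 0.9, 0.4, 0.8, 0.2]
faked = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]

print(classify(genuine))  # prints "genuine"
print(classify(faked))    # prints "faked"
```

A real system of this kind would extract many such dynamic features from automatically tracked facial actions and feed them to a trained pattern-recognition model rather than a fixed threshold.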
