
Neuroscientists construct 3D facial models using data from a person’s brain when remembering the face of someone familiar


The code that our brains use to tell faces apart has been reverse engineered.

This is because, for the first time, neuroscientists were able to construct 3D facial models using information stored in an individual’s brain when recalling faces of familiar people.

The study, published in the journal Nature Human Behaviour, grew out of the researchers’ effort to explore how the human brain identifies faces.

“Current cognitive theories are cast in terms of information-processing mechanisms that use mental representations,” the researchers, a team of scientists from the University of Glasgow, wrote in their study, noting that these mental representations underpin identifying familiar faces across varying conditions such as pose, illumination and ageing, as well as judging resemblance between family members.

However, the actual information content of these representations is rarely characterized, which, according to the researchers, hinders understanding of the mechanisms that use them.

To explore this mechanism, the researchers showed pictures of the faces of four of their colleagues to 14 other university members.

The researchers then set about determining which specific facial features the participants used to identify their colleagues’ faces from memory.

The volunteers repeatedly compared their recollection of one of the four real people’s faces with six randomly generated faces of the same age, ethnicity and gender.

The participants then picked the randomly generated face that was most like their memory of the real person’s face and ranked how similar they were.

Afterward, the researchers worked backward to determine which physical features the volunteers relied on to remember a given face.
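The study does not publish its analysis code, but the “work backward” step resembles classical reverse correlation: if each randomly generated face is described by a numeric feature vector, the remembered face can be estimated as a similarity-weighted average of the faces a participant chose. A minimal sketch under that assumption (function name and toy data are hypothetical):

```python
import numpy as np

def estimate_remembered_face(chosen_faces, ratings):
    """Estimate the feature vector of a remembered face by reverse
    correlation: average the randomly generated faces the participant
    chose, weighted by the similarity ratings they assigned
    (higher rating = more like the memory)."""
    chosen = np.asarray(chosen_faces, dtype=float)  # shape (n_trials, n_features)
    w = np.asarray(ratings, dtype=float)            # shape (n_trials,)
    w = w / w.sum()                                 # normalize to weights summing to 1
    return w @ chosen                               # weighted mean per feature

# Hypothetical toy data: 3 trials, 2 facial features (e.g. nose width, jaw shape)
faces = [[0.2, 0.8], [0.4, 0.6], [0.9, 0.1]]
ratings = [5, 4, 1]  # the first chosen face was rated most similar to memory
print(estimate_remembered_face(faces, ratings))
```

With enough trials, features the participant consistently selects dominate the weighted average, revealing which aspects of the face were stored in memory.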

“It’s difficult to understand what information people store in their memory when they recognise familiar faces. But we have developed a tool which has essentially given us a method to do just that,” said Philippe Schyns, Professor of Visual Cognition at the Institute of Neuroscience and Psychology.

“By reverse-engineering the information that characterises someone’s identity, and then mathematically representing it, we were then able to render it graphically,” Schyns added.

Once they had gathered the results from the initial study, the researchers used a database of 355 faces, each characterized by its shape and texture, to build a generative model of 3D face identity.

The research team then used this model to test the validity of the initial results, recruiting a new set of participants and asking them to rate the similarity between their recollections of a familiar face and the model’s randomly generated faces.

According to the researchers, by keeping the shape and texture information relating to age, ethnicity and sex the same as in the real faces, they could isolate each face’s unique identity information.
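One way to read “isolating identity information” is residualization: the shape and texture components shared by a face’s age, sex and ethnicity group are held constant, and whatever remains is that face’s unique identity. A hypothetical sketch of such a decomposition (function name and data invented for illustration; the study’s actual model is more elaborate):

```python
import numpy as np

def isolate_identity(face_vectors, group_labels):
    """Split each face vector into a group component (the mean
    shape/texture of its age/sex/ethnicity group) and an identity
    residual unique to that individual."""
    faces = np.asarray(face_vectors, dtype=float)
    labels = np.asarray(group_labels)
    identity = np.empty_like(faces)
    group_means = {}
    for g in np.unique(labels):
        mask = labels == g
        group_means[g] = faces[mask].mean(axis=0)  # shared demographic component
        identity[mask] = faces[mask] - group_means[g]  # what makes each face unique
    return group_means, identity

# Toy example: four faces, two demographic groups
faces = [[1.0, 2.0], [3.0, 4.0], [10.0, 10.0], [12.0, 14.0]]
groups = ["A", "A", "B", "B"]
means, identity = isolate_identity(faces, groups)
print(identity)  # each row now carries only the individual's deviation from its group
```

Adding a group mean back to any identity residual reconstructs a full face vector, which is the sense in which such a model is “generative.”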

“Our work highlights that such models of mental representations are critical to understanding generalization behaviour and its underlying information-processing mechanisms,” the researchers concluded in their study.

The study could become a cornerstone for understanding the brain mechanisms behind face identification, with potential applications in face generation for fields such as artificial intelligence (AI), video games and even criminal justice.

Sources include:

DailyMail.co.uk

Nature.com

NeuroScienceNews.com


