An algorithm that judges your face – Pieuvre.ca

When two people meet, they automatically size each other up, making quick judgments about everything from the other person’s age to their intelligence to their trustworthiness, based solely on their appearance. These first impressions, though often wrong, can be extremely powerful: they shape our relationships and influence all kinds of decisions, such as whether to hire someone or what sentence to hand down in a criminal trial. Researchers have taken an interest in this phenomenon by teaching an artificial intelligence to make the same kinds of judgments… in order to correct these instinctive reactions.

The scientists, based at the Stevens Institute of Technology and working in collaboration with Princeton University and the University of Chicago, trained an artificial intelligence algorithm to model this first impression and predict how people will be perceived based on a photo of their face. The work was published in the Proceedings of the National Academy of Sciences.

“There is a lot of work that focuses on modeling the physical appearance of people’s faces,” says Jordan W. Suchow, a cognitive scientist and AI expert. “We combine all of this with human judgments and use machine learning to study people’s biased first impressions of someone they meet.”

Suchow and his team asked thousands of people to give their first impressions of more than 1,000 computer-generated photos of faces, rating them on criteria such as intelligence, electability, religiosity, trustworthiness, and how extroverted the “person” appears. The responses were then used to train a neural network to make the same quick judgments about people, based solely on a photo of their face.
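The article does not spell out the team’s training recipe, but the general setup it describes, averaging crowd-sourced trait ratings per face and fitting an image model to predict them, can be sketched with a generic regression pipeline. The trait names, architecture, and hyperparameters below are assumptions chosen for illustration, not the authors’ implementation.

```python
# Illustrative sketch only: the study's actual architecture, trait list and
# training details are not given in the article, so this uses a generic
# image-regression recipe (PyTorch) on placeholder data.
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical trait dimensions, loosely matching those named in the article.
TRAITS = ["trustworthy", "intelligent", "electable", "religious", "extroverted"]

class FirstImpressionModel(nn.Module):
    """Maps a face image to one predicted rating per trait."""
    def __init__(self, num_traits: int = len(TRAITS)):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # generic CNN backbone
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_traits)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.backbone(images)

model = FirstImpressionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder batch: random tensors standing in for face photos and for the
# averaged human ratings collected from annotators.
images = torch.randn(8, 3, 224, 224)
ratings = torch.rand(8, len(TRAITS))

optimizer.zero_grad()
predictions = model(images)
loss = loss_fn(predictions, ratings)
loss.backward()
optimizer.step()
print(f"loss on placeholder batch: {loss.item():.3f}")
```

In the actual study, the targets would come from thousands of annotators judging more than 1,000 synthetic faces; the sketch only shows the shape of such data and the regression objective.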

“Given your face, we can use this algorithm to predict what people’s first impressions will be, and what stereotypes will be projected onto you once they see your face,” says Suchow.

Most of the algorithm’s findings correspond to collective intuitions or cultural assumptions: people who smile tend to be seen as more trustworthy, for example, while those who wear glasses are seen as more intelligent. In other cases, it’s a bit harder to figure out why the algorithm assigns a particular trait to a person.

“The algorithm does not provide targeted feedback, nor does it explain why a specific image elicits a particular judgment,” said Suchow. “But even so, it can help us understand how we are perceived: we could rank a series of photos according to how trustworthy they make an individual appear, for example, which makes it possible to choose how to present yourself.”
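As a hedged illustration of the photo-ranking use Suchow describes, the snippet below scores a batch of candidate photos with a model of this kind and orders them by a predicted “trustworthy” dimension. The model here is untrained and all names are assumptions, not part of any released tool, so the output is a placeholder that only demonstrates the ranking step.

```python
# Hypothetical usage: rank candidate photos of the same person by how
# trustworthy a first-impression model predicts they will appear.
import torch
import torch.nn as nn
from torchvision import models

TRAITS = ["trustworthy", "intelligent", "electable", "religious", "extroverted"]

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(TRAITS))
# In practice the weights would come from training as sketched earlier;
# this untrained model only demonstrates the scoring and sorting steps.
model.eval()

photos = torch.randn(5, 3, 224, 224)  # stand-in for preprocessed photo crops
with torch.no_grad():
    scores = model(photos)

order = torch.argsort(scores[:, TRAITS.index("trustworthy")], descending=True)
print("photos from most to least trustworthy-looking:", order.tolist())
```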

Although it was first developed to help psychology researchers generate face images for experiments on perception and social cognition, the new algorithm could also be used in everyday life. People typically cultivate a public persona with great care, for example by sharing only photos that make them look smart, confident or attractive, and it’s easy to see how the algorithm could aid that process, Suchow continued.

Since there are already social norms around presenting yourself in a positive light, this use sidesteps some of the ethical issues surrounding the technology, he added.

Managing the risks

More disturbingly, the algorithm can also be used to manipulate photos so that their subjects appear to match certain traits, for example by editing a politician to look more trustworthy, or making an opponent look “less intelligent” or “more suspicious”. AI tools are already being used to create fake videos of events that never happened, and the new algorithm could subtly alter real images to manipulate the public’s opinion of the person shown in them.

“With this technology, it’s possible to take a photo and create an edited version designed to give a certain impression,” warns Suchow. “For obvious reasons, we have to be careful about how this technology is used.”

To guard against precisely that, the researchers have filed a patent and are now working to create a company that will license the algorithm for pre-approved ethical uses. “We are doing everything we can to ensure that the algorithm will not be used to cause harm.”

While the current version of the algorithm focuses on the average response of a large group of people to a given face, Suchow hopes to develop another algorithm that can predict how a single individual will respond to another person’s face. This could provide much richer insight into how snap judgments shape our social interactions, and potentially help people recognize those judgments and look past first impressions when making important decisions.

“It’s important to remember that the judgments we model don’t reveal anything about a person’s true personality or skills,” says Suchow. “What we’re doing here is studying people’s stereotypes, and that’s something we should all be working to understand better.”
