New Delhi, India

Artificial intelligence (AI) continues to push the boundaries of what was once thought impossible. From predicting stock market trends to diagnosing diseases, the technology has demonstrated an uncanny ability to analyse vast amounts of data and make accurate forecasts. 


Now, new research conducted by a team from Denmark and Sweden suggests that artificial intelligence may be able to predict a person's political views based on their appearance, raising potential privacy concerns. 

Study findings

The study found that the AI model accurately predicted the political affiliations of people in around 61 per cent of cases. 


As per the research published in the journal Nature, variations in facial expressions were linked to political views. The model concluded that, due to their smiles, "both male and female right-wing composites appeared happier than their left-wing counterparts", while liberal candidates displayed more neutral expressions.

Women who exhibited a facial expression of contempt — neutral eyes and one corner of the lips lifted — were associated with more liberal politics by the model.



The researchers also observed a correlation between a candidate's level of attractiveness and their political ideology. 

As per Business Insider, women rated as attractive by the model's beauty scores were predicted to hold conservative views, although no similar correlation was found between men's attractiveness and right-wing leanings.

"Politicians on the right have been found to be more attractive than those on the left," states the study.

What data set was used?

The study aimed to demonstrate the privacy threat resulting from the combination of deep learning techniques and easily accessible photographs. For this purpose, the researchers used a dataset of 3,233 images of Danish political candidates and employed facial recognition and predictive analytics to assess facial expressions and attractiveness scores.
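The pipeline the researchers describe — extract facial features from a photograph, then predict a left/right label from them — can be illustrated with a toy sketch. Everything below is invented for illustration: the feature, the threshold, and the mini-dataset are assumptions, not the study's actual deep-learning model or data.

```python
# Hypothetical sketch of the feature-to-ideology idea, NOT the study's model.
# The study reported that right-wing composites appeared happier, so this toy
# classifier maps a higher (invented) smile-intensity score to a "right" label.

def predict_ideology(smile_intensity, threshold=0.5):
    """Toy rule-based classifier over a single invented feature."""
    return "right" if smile_intensity >= threshold else "left"

# Invented mini-dataset: (smile_intensity, true_label)
samples = [
    (0.9, "right"), (0.7, "right"), (0.3, "left"),
    (0.2, "left"), (0.6, "left"), (0.8, "right"),
]

# Accuracy of the toy rule on the toy data (the study reported ~61 per cent
# on its real dataset; this number is meaningless beyond the illustration).
correct = sum(predict_ideology(score) == label for score, label in samples)
accuracy = correct / len(samples)
print(f"toy accuracy: {accuracy:.0%}")
```

The real system replaced the single hand-picked feature and threshold with a deep neural network trained on the 3,233 photographs, but the privacy concern is the same: publicly available portraits feed the features, and the features feed a prediction.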

How is this ability a threat?

The findings raise concerns about how AI can perpetuate stereotypes and biases. AI models trained on preconceived notions of beauty standards and gender can reinforce existing stereotypes, potentially leading to biased outcomes in various domains, including hiring decisions. 


"Facial photographs are commonly available to potential employers, and those involved in hiring decisions self-declare a willingness to discriminate based on ideology. Members of the public may thus be aided by recognising what elements of their photographs could affect their chances of employment," write the researchers. The study's results also shed light on the implications of advanced technologies, such as AI, in reinforcing perceptions about specific demographics.

(With inputs from agencies)

