
A new study has found that humans can detect artificially generated speech only 73 per cent of the time. Detection accuracy was the same for both English and Mandarin speakers.
Researchers at University College London used a text-to-speech algorithm, trained on two publicly available datasets (one in English and one in Mandarin), to generate 50 deepfake speech samples in each language.
Deepfake AI, a form of generative artificial intelligence, is synthetic media created to resemble a real person's voice or appearance.
The researchers played the sound samples to 529 participants to see whether they could distinguish real speech from fake. The participants correctly identified the fake speech only 73 per cent of the time.
Accuracy improved only slightly after participants were trained to recognise the characteristic features of deepfake speech.
This is the first study to assess the human ability to detect artificially generated speech in languages other than English.
The first author of the study, Kimberly Mai said, “In our study, we showed that training people to detect deepfakes is not necessarily a reliable way to help them to get better at it. Unfortunately, our experiments also show that at the moment automated detectors are not reliable either."
“They’re really good at detecting deepfakes if they’ve seen similar examples during their training phase if the speaker is the same or the clips are recorded in a similar audio environment, for example. But they’re not reliable when there are changes in the test audio conditions, such as if there’s a different speaker," she added.
She said it was essential to improve automated deepfake speech detectors and for organisations to “think about strategies to mitigate the threat that deepfake content poses”.
Dr Karl Jones, Head of Engineering at Liverpool John Moores University, warned that the UK justice system is not designed to protect against the use of deepfakes. “Deepfake speech is almost the perfect crime – because you don’t know that it’s been done,” he stated.
Sam Gregory, executive director of Witness, an international nonprofit organisation, said, "At Witness, we speak about a detection equity gap. The people who need the capacity to detect – journalists and factcheckers, and civil society and election officials – are the ones who don’t have access to these [detection] tools. This is a huge issue that is going to get worse if we don’t invest in those skills and resources. We may not need to have detection tools available to everyone, because that also makes them harder to be robust. But we need to think about the investment in supporting intermediaries.”