Unmasking Deepfakes: Can Humans Spot Artificial Speech?

21.08.2023 05:20
Updated: 13.05.2024 21:24

Detecting deepfakes isn't as easy as it seems, according to a recent PLOS ONE study. 

The research is the first of its kind to examine whether people can identify artificially generated speech in a language other than English.

Let's find out more!

New AI algorithms are harder to spot

Deepfakes, powered by generative AI, replicate a real person's voice and appearance using algorithms trained on recordings of that person. 

Modern algorithms can recreate a voice from as little as a three-second clip, while Apple's software works from around 15 minutes of recordings.


UCL researchers crafted 50 deepfake speech samples in English and Mandarin, testing participants' detection skills. 

Astonishingly, participants identified the fake speech only 73% of the time, and training offered only a slight improvement.

Why is it important?

Lead author Kimberly Mai highlights the need for advanced detection systems to counter increasingly sophisticated deepfake tech. 

While generative AI benefits speech accessibility, concerns about misuse by criminals and state actors call for strategic countermeasures.

This study sheds light on the evolving landscape of deepfake technology, calling for a balance between its potential and possible risks.
 

Author: Kate Yakimchuk

