Deepfakes – synthetic media created with computer vision and deep learning – manipulate content by projecting one person's likeness onto an existing image or video. The technology raises questions about the ethics of AI, as deepfakes are often used deceptively. Governments and technology giants such as Facebook have imposed limits on their use, largely because deepfakes appear in celebrity pornographic videos, hoaxes, and fake news. According to the AI firm Deeptrace, as of September 2019 there were about 15,000 deepfake videos online – 96% of which were pornographic.
How Deepfakes Work
In visual deepfakes, artificial neural networks known as encoders are trained to recognise patterns in facial features. Images are fed into these networks, which learn to identify the patterns that form faces and, in doing so, compress each image into a compact representation. This compression is what allows one person's face to be mapped onto another's in a face swap. Facial manipulation is also possible – reconstructing one person's expressions on another person's face.
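The compression step can be illustrated with a minimal sketch. The sizes below (a 64x64 face compressed to a 128-dimensional code) and the single-layer encoder are illustrative assumptions; real deepfake encoders are deep convolutional networks whose weights are learned by training, not drawn at random as here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: a 64x64 grayscale face
# flattened to 4096 pixels, compressed to a 128-dimensional code.
IMG_DIM, LATENT_DIM = 64 * 64, 128

# Encoder weights. In practice these are learned by training the
# network to reconstruct its own input; random values here only
# illustrate the data flow.
W_enc = rng.normal(0, 0.01, size=(IMG_DIM, LATENT_DIM))

def encode(image_flat):
    """Compress a flattened face image into a small latent vector."""
    return np.tanh(image_flat @ W_enc)

face = rng.random(IMG_DIM)   # stand-in for a real face image
latent = encode(face)
print(latent.shape)          # the 4096-pixel face is now a 128-number code
```

The key point the sketch shows is the shape change: the encoder throws away pixel-level detail and keeps only the compact facial-pattern representation that the decoders described next know how to expand again.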
The decoder, another neural network, is trained to recover a face from the compressed representation. A face swap uses two decoders: one trained to recover Person A's face and another trained to recover Person B's. Feeding the compressed representation of Person B's face into the first decoder reconstructs Person A's face with Person B's facial expressions. The resulting frames are then stitched together to form a convincing video.
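The shared-encoder, two-decoder arrangement described above can be sketched as follows. All dimensions and weight values are illustrative assumptions (real systems learn these weights from thousands of images of each person); the sketch only demonstrates the data flow of a swap.

```python
import numpy as np

rng = np.random.default_rng(1)
IMG_DIM, LATENT_DIM = 64 * 64, 128

# One shared encoder and two person-specific decoders. The weights
# would be learned during training; random values here just let the
# swap's data flow run end to end.
W_enc = rng.normal(0, 0.01, (IMG_DIM, LATENT_DIM))
W_dec_A = rng.normal(0, 0.01, (LATENT_DIM, IMG_DIM))  # reconstructs Person A
W_dec_B = rng.normal(0, 0.01, (LATENT_DIM, IMG_DIM))  # reconstructs Person B

def encode(image_flat):
    """Shared encoder: compress any face into a latent code."""
    return np.tanh(image_flat @ W_enc)

def decode(latent, W_dec):
    """Expand a latent code back into a full-size face image."""
    return latent @ W_dec

# The swap: encode a frame of Person B, then decode it with Person A's
# decoder. The output is Person A's face wearing Person B's expression.
frame_B = rng.random(IMG_DIM)
swapped_frame = decode(encode(frame_B), W_dec_A)
print(swapped_frame.shape)  # a full 4096-pixel frame again
```

Repeating this for every frame of a video, then reassembling the frames, is what produces the final deepfake clip.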
Spotting a Deepfake
Although deepfake technology is rapidly improving, it has yet to imitate humans perfectly. A deepfake video will often have inconsistent eye blinking, strange lighting effects, and imperfect details, such as hair strands that appear unnatural. Poorer-quality videos may show patchy skin tones and bad lip syncing. These flaws can make deepfakes unconvincing.
Audio Deepfakes
Audio deepfakes, more commonly known as voice skins or voice clones, mimic a person's voice in an extremely believable way. As a result, they have often been used for fraud and scams.
As previously mentioned, deepfakes have primarily been used for harm and exploitation: pornographic videos – with a disproportionate number of victims being women – and disinformation. Still, there are useful applications. Deepfake techniques were used to show characters in their youth and to recreate deceased actors in recent Star Wars films. The technology is also increasingly used in e-commerce, letting customers try on clothing virtually.
Written by Nichapatr (Petch) Lomtakul