The Two Faces of DeepFake Technology

As a kid, I remember watching television programs with science fiction themes that featured artificial intelligence, such as Star Trek, The Jetsons, Lost in Space, Battlestar Galactica, and Terminator. I was mesmerized by the future possibilities of artificial intelligence and the idea that one day we would all be using some form of it in our daily lives. Some of those programs portrayed AI in a positive light, emphasizing the good it could do, while others portrayed it negatively, emphasizing the threats that could arise if AI went rogue. What I can say with certainty is that artificial intelligence (AI) is changing the way we do things globally.

AI has evolved and is now used by major companies through offerings such as IBM’s Watson, Google AI, Microsoft AI, and Amazon’s AI services. Terms such as machine learning, deep learning, and AI are often lumped together under the umbrella of AI, but they are distinct: machine learning is a subset of AI, and deep learning is in turn a subset of machine learning. The bottom line is that a great deal of good can come from AI; however, some are using it for harmful purposes.

The term “DeepFake” originates from “deep learning.” In a deepfake, an attacker modifies an existing image or video by applying someone else’s face, or facial elements such as the eyes, to the original content. This is very different from ordinary image editing, because it can be used to make the person in the original video appear to say something that they never actually said.
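To make the idea of "applying someone else's face to the original image" concrete, here is a deliberately simplified NumPy sketch of the compositing step only. Real deepfakes use deep generative models (such as autoencoders or GANs) to synthesize the replacement face; the function name, patch placement, and alpha-blending here are illustrative assumptions, not an actual deepfake pipeline.

```python
import numpy as np

def blend_face_region(target, source_face, top, left, alpha=0.8):
    """Naively composite a source face patch onto a target image.

    Toy illustration of region replacement only: real deepfakes
    generate the replacement face with a trained neural network,
    not simple alpha blending.
    """
    out = target.astype(float).copy()
    h, w = source_face.shape[:2]
    region = out[top:top + h, left:left + w]
    # Weighted blend: alpha controls how strongly the new face
    # replaces the original pixels in that region.
    out[top:top + h, left:left + w] = alpha * source_face + (1 - alpha) * region
    return out.astype(np.uint8)

# Tiny synthetic "images": an 8x8 black target and a 4x4 gray face patch.
target = np.zeros((8, 8, 3), dtype=np.uint8)
face = np.full((4, 4, 3), 200, dtype=np.uint8)
result = blend_face_region(target, face, top=2, left=2, alpha=1.0)
print(result[3, 3])  # pixel inside the patched region
```

With `alpha=1.0` the patch fully replaces the original pixels; lowering `alpha` leaves a visible mix of old and new, which is roughly why crude face swaps look "off" while model-generated deepfakes can be seamless.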

Because DeepFake technology can convincingly modify a human face in an image or video, it could, as recent reports and tests indicate, offer some positive uses. For example, modifying your face in an online image or video chat would keep your true identity masked, affording you a level of protection against identity theft. What is even more impressive is that your newly modified “online face” would typically be an amalgamation of facial features drawn from millions of other images, and therefore would not match any real person’s face. With facial recognition software running rampant on the Internet, this could offer protection from that technology as well.

DeepFakes can, as previously noted, be used for nefarious purposes such as revenge adult content, personal attacks, and bullying. Because the modified content can be made to look convincingly like original content, there are now privacy and legal disciplines dedicated to dealing with DeepFake technology and its negative consequences. From copyright infringement, privacy violations, and cyberbullying to state-sponsored fake news and propaganda, the illicit use of AI-generated content is already happening, and protections against the consequences of these activities are evolving to try to meet these new threats head-on.

What else can be done to keep AI and its associated spin-off technologies from becoming more harmful than helpful? And what other uses of DeepFake technology have you come across?