J. DeGol
Steg.AI,
United States
Keywords: Generative AI, Deepfake, Watermarking, Forensic Watermarking, Steganography, Deepfake Detection, Deepfake Poisoning
Summary:
A “Deepfake” is a portmanteau of “Deep Learning” and “Fake”. Deepfakes are fake images, videos, audio recordings, and other media of people. Rapid advancements in machine learning and generative AI technology have led to incredibly realistic and compelling deepfakes, which are quickly becoming a cybersecurity and public safety issue. Examples include the fake Joe Biden robocall in New Hampshire (a misinformation campaign) and the finance worker who paid out $25M after a video call with a deepfake of their CFO (fraud). This presentation focuses on three approaches we have developed to combat malicious use of deepfakes: (1) deepfake detection, (2) deepfake poisoning, and (3) zero-trust security. Deepfake detection uses machine learning to detect the subtle traces that generative AI leaves behind in digital content. Deepfake poisoning makes imperceptible changes to digital content so that if the content is used for deepfake creation, the outputs are spoiled. Lastly, zero-trust security is an approach in which no content is trusted unless it is verified; the verification process relies on a combination of provenance-storing metadata and forensic watermarking.
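As a rough illustration of the zero-trust flow described above, the sketch below shows the decision logic only: content is treated as untrusted unless a provenance record or a forensic watermark can be verified. The helper functions `read_provenance_metadata` and `decode_watermark` are hypothetical placeholders for this sketch, not Steg.AI's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VerificationResult:
    trusted: bool
    reason: str

def read_provenance_metadata(path: str) -> Optional[dict]:
    """Placeholder: parse a signed provenance record attached to the file, if present."""
    return None  # assumed absent in this sketch

def decode_watermark(path: str) -> Optional[str]:
    """Placeholder: attempt to recover a forensic watermark payload from the content."""
    return None  # assumed absent in this sketch

def verify_content(path: str) -> VerificationResult:
    """Zero-trust check: content is untrusted unless provenance or a watermark verifies."""
    metadata = read_provenance_metadata(path)
    if metadata is not None and metadata.get("signature_valid"):
        return VerificationResult(True, "provenance metadata verified")

    payload = decode_watermark(path)
    if payload is not None:
        return VerificationResult(True, f"forensic watermark recovered: {payload}")

    return VerificationResult(False, "no verifiable provenance or watermark; do not trust")

if __name__ == "__main__":
    print(verify_content("example.jpg"))
```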