New tool helps you fight back against deepfakes
As the fight against deepfakes heats up, one company is helping us fight back. Hugging Face, a company that hosts AI projects and machine learning tools, has developed a range of “state of the art technology” to combat “the rise of AI-generated ‘fake’ human content” like deepfakes and voice scams.
This range of technology centers on a collection of tools labeled ‘Provenance, Watermarking and Deepfake Detection,’ which includes tools that not only detect deepfakes but also embed watermarks in audio files, LLM output, and images.
Introducing Hugging Face
Margaret Mitchell, researcher and chief ethics scientist at Hugging Face, announced the tools in a lengthy Twitter thread, breaking down how each one works. The audio watermarking tool, for instance, works by embedding an “imperceptible signal that can be used to identify synthetic voices as fake,” while the image “poisoning” tool works by “disrupt[ing] the ability to create facial recognition models.”
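To make the audio idea concrete: the simplest form of an imperceptible watermark is a low-amplitude noise pattern derived from a secret key and added to the waveform, which a detector holding the same key can pick up by correlation. The sketch below is not Hugging Face’s tool, which Mitchell does not detail at the code level; it is a minimal, self-contained illustration in Python/NumPy, and the function names (`make_watermark`, `embed`, `detect`), the gain, and the detection threshold are all assumptions chosen for the example.

```python
import numpy as np

RATE = 16_000            # sample rate (Hz)
SECRET_SEED = 42         # key shared between embedder and detector
WATERMARK_GAIN = 5e-3    # small next to the speech amplitude; real tools shape this psychoacoustically

def make_watermark(n_samples: int, seed: int = SECRET_SEED) -> np.ndarray:
    """Pseudorandom noise sequence derived from a secret key."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(n_samples)

def embed(audio: np.ndarray, seed: int = SECRET_SEED) -> np.ndarray:
    """Add a very low-amplitude keyed noise pattern to the audio."""
    wm = make_watermark(len(audio), seed)
    return audio + WATERMARK_GAIN * wm

def detect(audio: np.ndarray, seed: int = SECRET_SEED, threshold: float = 4.0) -> bool:
    """Correlate against the keyed pattern; a strong match flags the clip."""
    wm = make_watermark(len(audio), seed)
    # Normalized correlation: roughly standard normal for unmarked audio,
    # much larger when the keyed pattern is present.
    z = np.dot(audio, wm) / (np.linalg.norm(audio) + 1e-12)
    return z > threshold

if __name__ == "__main__":
    # One second of a synthetic "voice" stand-in (a plain sine tone).
    t = np.linspace(0, 1, RATE, endpoint=False)
    clean = 0.1 * np.sin(2 * np.pi * 220 * t)

    marked = embed(clean)
    print("clean clip flagged: ", detect(clean))   # expected: False
    print("marked clip flagged:", detect(marked))  # expected: True
```

Production systems, including the ones Mitchell lists, are far more robust than this, shaping the signal so it stays inaudible and survives compression or re-recording, but the embed-then-correlate structure is the same basic idea.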
Furthermore, the image “guarding” tool, Photoguard, works by making an image “immune” to direct editing by generative models. There is also Fawkes, which limits the use of facial recognition software on publicly available pictures, and a number of watermark-embedding tools, including Imatag, WaveMark, and Truepic, whose marks can be detected by dedicated software.
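The “immunizing” approach behind Photoguard rests on adversarial perturbations: tiny, bounded pixel changes computed so that a generative model’s encoder no longer sees what a human sees. The sketch below is a heavily simplified, conceptual illustration rather than Photoguard itself: it uses a random-weight stand-in encoder instead of a real diffusion model, and the function name `immunize`, the step size, and the perturbation budget are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for the generative model's image encoder. Photoguard perturbs the
# encoder of the actual model being defended against; this tiny random-weight
# network just keeps the sketch self-contained and runnable.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
encoder.eval()

def immunize(image: torch.Tensor,
             epsilon: float = 8 / 255,    # max per-pixel change, kept barely visible
             step_size: float = 1 / 255,
             steps: int = 50) -> torch.Tensor:
    """PGD-style perturbation that drags the image's embedding toward that of
    a flat gray image, so edits driven by that embedding degrade."""
    with torch.no_grad():
        target = encoder(torch.full_like(image, 0.5))  # embedding of a gray image

    adv = image.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.mse_loss(encoder(adv), target)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - step_size * grad.sign()                   # move toward the target
            adv = image + (adv - image).clamp(-epsilon, epsilon)  # stay within the budget
            adv = adv.clamp(0.0, 1.0)                             # stay a valid image
    return adv.detach()

if __name__ == "__main__":
    img = torch.rand(1, 3, 224, 224)        # placeholder for a real photo
    protected = immunize(img)
    print("max pixel change:", (protected - img).abs().max().item())
```

Photoguard’s published method applies this kind of optimization against the encoder (or the full editing pipeline) of the generative model it is defending against, which is what makes the protection carry over to real editing attempts.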
With the rise of AI-generated "fake" human content–"deepfake" imagery, voice cloning scams & chatbot babble plagiarism–those of us working on social impact @huggingface put together a collection of some of the state-of-the-art technology that can help:https://t.co/nFS7GW8dtk
— MMitchell (@mmitchell_ai) February 12, 2024
While these tools are certainly a good start, Mashable tech reporter Cecily Mauran warned there might be some limitations. “Adding watermarks to media created by generative AI is becoming critical for the protection of creative works and the identification of misleading information, but it’s not foolproof,” she explains in an article for the outlet. “Watermarks embedded within metadata are often automatically removed when uploaded to third-party sites like social media, and nefarious users can find workarounds by taking a screenshot of a watermarked image.”
“Nonetheless,” she adds, “free and available tools like the ones Hugging Face shared are way better than nothing.”
Featured Image: Photo by Vishnu Mohanan on Unsplash
Charlotte Colombo
Freelance Journalist
Charlotte Colombo is a freelance journalist with bylines in Metro.co.uk, Radio Times, The Independent, Daily Dot, Glamour, Stylist, and VICE among others. She most recently worked as a Staff Writer for entertainment outlet The Digital Fix for two years and, prior to that, worked with Business Insider and Dexerto on their digital culture desks. She’s also appeared on BBC Radio 5 and The Guardian podcast to share her expertise on technology, influencers, and niche internet subcultures. She holds an MA in Magazine Journalism from City, University of London and has been freelancing for three years. She has a wide range of specialties including technology, digital culture, entertainment, lifestyle, and neurodiversity.