Buddhist orgs are issuing warnings about AI deepfake videos


In the past week, two Buddhist communities have issued statements warning members to be on the lookout for illegitimate “deepfake” videos of their guiding teachers: AI-assisted videos that can convincingly make it appear that someone has said or done something they haven’t.

Perhaps you’ve seen the type: some familiar figure is suddenly on your screen. It looks real at a glance, but look closely and you’ll see, for example, that the words don’t match up with the speaker’s mouth. And, of course, this person is strongly endorsing a product when you know they’d never do such a thing. Or maybe you’ve seen the type and been fooled. With AI learning and improving all the time, sooner or later we’ll all be fooled.

It’s been just two years since ChatGPT launched, putting artificial intelligence into the hands of the public and creating a boom of AI usage in the personal, organizational, and corporate realms. The Buddhist world has already seen plenty of discussion and use of AI, from non-human priests created to share dharma teachings, to mindfulness being similarly taught, to wise minds from the community taking time to address AI’s potential dangers.

Among those dangers is AI’s ever-increasing facility with learning from and copying real-life figures so that their personas may be used in unauthorized and unhelpful ways. And indeed, this is what two communities — Tergar Asia Foundation, part of Mingyur Rinpoche’s community, and Dongyu Gatsal Ling Nunnery in India, of which Jetsunma Tenzin Palmo is director — are now telling us has happened to their guiding figures. 

“It has come to our notice,” reads Tergar Asia’s statement, “that of the many AI-generated videos now shared on the internet, some of them appear to feature Mingyur Rinpoche discussing topics such as life and relationships in a way that is unrelated to his teachings on awareness, compassion and wisdom. Part of the content is even contrary to the teachings of Buddha dharma and may mislead and cause confusion among practitioners.”

Dongyu Gatsal Ling Nunnery serves Himalayan women in the Drukpa Kagyu tradition of Tibetan Buddhism. In its statement titled “Regarding AI-Generated/Deepfake Videos,” DGLN likewise reported that “some unscrupulous sources have been using Jetsunma Tenzin Palmo’s likeness for endorsements and/or self-promotion.”

The two Buddhist communities in question have shared good common-sense advice to consider when confronting what may be an AI/deepfake situation. Their tips boil down to:

- Be certain of the source you’re looking at.
- If you see false information, do what you can to flag it so that others encountering it will know to disregard it. “For instance,” writes DGLN, “if you see a video where Jetsunma [Tenzin Palmo] is speaking in a foreign language (not English), saying something which is contrary to the values of the Buddhadharma, blatantly asking for money or giving an endorsement to a product or brand which is completely disconnected from Jetsunma’s own activities; take it as a clear red flag and be cautious about it.”
- Report any suspected deepfake activity to the community of the teacher or figure whose image is being misappropriated.

For more on AI’s potential promise and pitfalls in the Buddhist world, read “What A.I. Means for Buddhism,” from Lion’s Roar magazine. 

Rod Meade Sperry. Photo by Megumi Yoshida, 2024

Rod Meade Sperry

Rod Meade Sperry is the editor of Buddhadharma: The Practitioner’s Guide (published by Lion’s Roar), and the book A Beginner’s Guide to Meditation: Practical Advice and Inspiration from Contemporary Buddhist Teachers. He lives in Halifax, Nova Scotia, with his partner and their tiny pup, Sid.