London
AI-generated human avatars can deliver your scripts with perfect diction and a strikingly natural look. But what happens when the videos are used to support repressive regimes and criminal enterprises? That is what happened with avatars generated through the AI app Synthesia, and the real people whose likenesses were used to create the AI models are now speaking up.
According to a report in The Guardian, the technology of the London-based startup, which recently achieved unicorn status (a valuation of more than $1 billion), was deployed to support Ibrahim Traoré, who led the 2022 military coup in Burkina Faso.
Well-groomed AI anchors appeared in videos circulated on platforms such as Telegram, urging people to rally behind the junta leader-turned-president.
Synthesia is intended mainly for creating marketing videos. But the platform was used to generate propaganda deepfakes, in violation of its own terms of service.
The actors whose body language was used to train the AI models say they feel betrayed, and that the negative associations have taken a toll on their mental health.
Synthesia claims to have improved its content moderation, but that claim was undercut when The Guardian was still able to generate videos on the platform from controversial scripts.
Synthesia videos have also been used in misinformation campaigns by states including Russia and China.
Two pro-Venezuela videos featuring fake news segments presented by Synthesia avatars appeared on YouTube and Facebook. One fake anchor condemned “western media claims” of economic instability and poverty in Venezuela.
In another video, an AI avatar claimed to be the chief executive of a cryptocurrency platform.
Mark Torres, the actor whose likeness was used in the Burkina Faso video, told The Guardian he felt violated and vulnerable on seeing his image co-opted.
“I’m in shock, there are no words right now … I have never felt so violated and vulnerable,” The Guardian quoted Torres as saying.
“I don’t want anyone viewing me like that. Just the fact that my image is out there, could be saying anything … People will think I am involved in the coup,” added Torres.
“Knowing that this company I trusted my image with will get away with such a thing makes me so angry. This could potentially cost lives, cost me my life when crossing a border for immigration.”
"It’s not me, it’s just my face. But they’ll think I’ve agreed to it,” said Dan Dewhirst, who acted in creating the original anchor that was used in the Venezuela video.
“Countless people contacted me about it … But there were probably other people who saw it and didn’t say anything, or quietly judged me for it."
“I was furious. It was really, really damaging to my mental health," he said, adding it caused an overwhelming amount of anxiety.
The Guardian also quoted an unidentified former Synthesia employee, who said that once the AI is trained to create a model based on a real person’s body language, it cannot unlearn that likeness; removing it would require deleting the AI model entirely.
Synthesia told the paper that it does not allow its stock avatars to be used for political content, including “content that is factually accurate but may create polarisation”.
Founded in 2017 by a team that includes Victor Riparbelli and Steffen Tjerrild, Synthesia counts Microsoft, Zoom, Xerox and Ernst & Young among its clients.
(With inputs from agencies)