New Delhi, India
The Indian government on Tuesday (Dec 26) issued a directive to social media companies to curb deepfake content on their platforms.
The Ministry of Electronics and Information Technology (MeitY) asked Facebook, Instagram, X and others to remove AI-generated deceptive videos and comply with Information Technology (IT) Rules.
Deepfake technology can be used to impersonate anyone using their images. The government's advisory clearly stated that platforms must raise awareness among their users about prohibited content.
Misinformation represents a deep threat to the safety and trust of users on the Internet.
➡️ #Deepfake which is misinformation powered by #AI, further amplifies the threat to safety and trust of our #DigitalNagriks.
➡️ On 17th November, PM @narendramodi ji alerted the country… pic.twitter.com/QM38gPOt7O
— Rajeev Chandrasekhar (@Rajeev_GoI) December 26, 2023
Under India's IT Rules, social media platforms must ensure that activities causing any of 11 listed user harms are not carried out on their platforms.
These 11 listed user harms include:
1. Threats to national security or unity, defence of India, friendly relations with foreign states, or public order
2. Child pornography
3. Obscenity or vulgarity
4. Disinformation or false or misleading information
5. Insults or harassment on the basis of gender, religion, race, etc.
6. Personal information without consent
7. Impersonation of another person
8. Commercial fraud or deception
9. Deception or cheating in online games
10. Software viruses or malware
11. Any other unlawful activity
Deepfake menace
The year 2023 witnessed rapid advances in artificial intelligence (AI), which fuelled a widespread deepfake menace on social media.
Deepfakes of several Indian celebrities, including actresses Rashmika Mandanna, Kajol, Alia Bhatt, Priyanka Chopra and Katrina Kaif, as well as business tycoon Ratan Tata, have surfaced on the internet in the past couple of months.
Ripple CEO deepfake controversy
One such major deepfake controversy broke out last week when a fake video featuring Brad Garlinghouse, CEO of US-based crypto solutions provider Ripple, surfaced on YouTube.
The video, which appeared real, showed the CEO asking people to invest in a fraudulent crypto scheme.
Even after Google was made aware of the deepfake scam, the tech giant failed to take down the video immediately.
Words of caution from Indian PM
Indian PM Narendra Modi, who has himself been the subject of AI deepfakes circulating online, has also highlighted the need to regulate AI and to stay alert while using new technologies.
"We have to be careful with new technology. If these are used carefully, they can be very useful. However, if these are misused, it can create huge problems. You must be aware of deepfake videos made with the help of generative AI," he said while speaking at an event.
(With inputs from agencies)