New Delhi, India

A 22-year-old man from Long Island, US, was sentenced this week to six months behind bars for posting deepfake photos of underage girls on porn sites. More than shock, this news came as a matter of grave concern, as it gives a glimpse of what the future may look like for all genders, especially women. It also shows the darker side of the artificial intelligence (AI) revolution, which is the hottest tech trend at the moment.


What is artificial intelligence (AI)? 

To understand deepfakes, AI imaging, etc, let's first understand what artificial intelligence is.

IBM defines AI as a field that combines computer science and robust datasets to enable problem-solving, and which also encompasses the sub-fields of machine learning and deep learning. In these disciplines, AI algorithms are used to create expert systems that make predictions or classifications based on input data.


Recently, we saw and applauded several stunning AI images of places, nations, leaders, and celebrities, showing them in entirely different avatars. While a viral AI-generated photo of the Pope in a puffy jacket, created by the AI-based image generator Midjourney, looked amusing to some, we should also remember that the same technology can be used to depict someone without clothes.

What is a deepfake? 

Deepfakes are synthetic media, digitally manipulated using deep learning AI, that aim to convincingly replace one person's likeness with that of another. 'Innocent' minds may come across or use the technology in mimicry, mockery, or other fun apps. But the same algorithms are used for non-consensual deepfake pornography, revenge porn, and child porn.


Deepfake pornography

Experts had raised concerns when porn created using the technology first started to spread across the internet several years ago. It began when a Reddit user shared clips that placed the faces of female celebrities on the shoulders of porn actors. 

Similar images, as well as videos, have since been circulated by deepfake creators, mostly aimed at online influencers, journalists, and others with a public profile. Thousands of videos can be found on a variety of websites. And some apps have allowed users to create their own images, essentially allowing anyone to turn anyone into sexual fantasies without their consent, or to use the technology to harm former partners. 

At the heart of the concerns raised by experts is how easy it has become to make sophisticated and visually compelling deepfakes with tools available at one's fingertips. Experts also believe that the development of generative AI tools will exacerbate the existing problem.

As quoted by The Associated Press, Adam Dodge, the founder of EndTAB, a group that creates awareness on technology-enabled abuse, said: "The reality is that the technology will continue to proliferate, develop and become sort of as easy as pushing the button. And as long as that happens, people will undoubtedly... continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images." 

How to stop it? 

Amid growing concerns, some AI companies claim to be restricting access to explicit images. OpenAI, the company behind the wildly popular ChatGPT, said it removed explicit content from the data used to train its image-generating tool DALL-E, which limits the ability of users to create those types of images. OpenAI says it also filters requests and blocks users from creating AI images of celebrities and prominent politicians.

Midjourney, another AI image generator, blocks the use of certain keywords. It also encourages users to flag problematic images to moderators. 

When reports emerged that some users made celebrity-inspired nude pictures using the image generator Stable Diffusion, developed by the startup Stability AI, the company issued an update in November that removed the ability to create explicit images using the technology. 

Social media companies such as TikTok have come up with new guidelines to better protect users on their platforms. Other platforms where deepfakes are prevalent have introduced new rules against harmful material and content. 

Meta, and also some adult sites such as OnlyFans and Pornhub, began participating in Take It Down, an online tool that allows teens to report explicit images and videos of themselves on the internet. 

As quoted by AP, Gavin Portnoy, a spokesperson for the National Center for Missing and Exploited Children which operates the Take It Down tool, said: "When people ask our senior leadership what are the boulders coming down the hill that we're worried about? The first is end-to-end encryption and what that means for child protection. And then second is AI and specifically deepfakes." 

Portnoy said, "We have not ... been able to formulate a direct response yet to it." 

