Experts Fear AI Tools Could Worsen Nonconsensual Deepfake Pornography

Mon Apr 17 2023

NEW YORK: Artificial intelligence (AI) tools can accomplish a wide range of tasks, from designing ad campaigns for e-commerce and creating striking art to speeding up business workflows. However, experts caution that nonconsensual deepfake pornography is one of the technology’s more dangerous side effects, one that is more harmful to women than to men.

What are deepfakes?

Deepfakes are videos and images that have been digitally manipulated or generated using AI or machine learning. Numerous websites host pornography made with this technology, raising serious ethical and legal issues. This type of pornography frequently targets online influencers, journalists, and other people with a public profile.

Some websites let users design their own images, effectively enabling anyone to turn any person into a sexual object against their will, or to use the technology to hurt ex-partners.

Experts have warned that the issue could get worse as generative AI tools, which use existing data to create new content, make it harder to tell a deepfake from authentic material. Enforcement is also difficult because laws and regulations vary from country to country, and the users who create such images may be halfway around the world from their victims. This raises the question of how well equipped AI models and online platforms are to limit access to explicit content.

AI models and online platforms trying to control explicit content

Despite the difficulties, some AI models have implemented controls to limit the creation of pornographic images. For instance, OpenAI removed explicit content from the data used to train its image-generating program DALL-E, limiting users’ ability to produce such images.

Stability AI, the startup behind the image generator Stable Diffusion, has likewise updated its policies to stop users from using the technology to produce explicit images.

TikTok, Apple and Google increase efforts to curb deepfakes

Social media companies have also stepped up efforts to keep harmful content off their platforms. TikTok, for example, now requires that any deepfake or manipulated content depicting realistic scenes carry a label indicating that it is fake or has been altered in some way. Apple and Google have also banned deepfakes from their platforms.

The prevalence of deepfake pornography raises significant moral and legal issues. Even as some AI models and online platforms take steps to limit access to explicit content, the problem persists, and the fight against nonconsensual deepfake pornography is ongoing.
