Protecting Taylor Swift's Image: How X Is Putting a Stop to the Spread of Sexually Explicit Deepfake Content
Jan 29, 2024
Elon Musk's social media platform X has blocked searches for Taylor Swift after sexually explicit deepfake images of the singer began circulating online.
When users search for Taylor Swift's name on the platform, they see an error message and a prompt to retry their search, along with a reassurance that the problem is not on their end.
Searches for variations of her name, such as "taylorswift" and "Taylor Swift AI," return the same error message.
Last week, sexually explicit and abusive fake images of Taylor Swift began circulating widely on X, making her the most prominent victim of a problem that tech platforms and anti-abuse organizations have struggled to address.
AI-generated images are produced by artificial intelligence systems that work from written prompts, and they can be created without the consent of the people they depict. Users on the platform have voiced concern that AI can be misused to circulate deceptive images that violate subjects' privacy. In response, some people are actively reporting such posts, while others are trying to drown them out and push them out of prominence in discussions.
Joe Benarroch, head of business operations at X, said in a statement to various news outlets: "This is a temporary measure taken with utmost caution as we prioritize safety regarding this matter."
Following the online spread of these images, Swift's dedicated fanbase, known as "Swifties," swiftly mobilized and launched a counteroffensive on X. They also started a hashtag campaign, #ProtectTaylorSwift, to inundate the platform with positive images of the pop star. Some individuals even reported accounts that were sharing the deepfakes.
Reality Defender, a group specializing in detecting deepfakes, reported a flood of nonconsensual pornographic material featuring Swift, particularly on X. Some of these images also made their way onto Facebook, which is owned by Meta, as well as other social media platforms.
Researchers found several dozen unique AI-generated images. The most widely shared were football-related, depicting a painted or bloodied Swift in objectifying scenarios and, in some instances, showing violent harm being done to her deepfake likeness.
According to researchers, explicit deepfakes have become far more prevalent in recent years, as advances in the underlying technology have made creating such manipulated images more accessible and easier to use.
A 2019 report by DeepTrace Labs, an AI firm, found that deepfakes were predominantly used to target women, with the majority of victims being Hollywood actors and South Korean K-pop singers.