If anyone can rally a base, it’s Taylor Swift.
When sexually explicit, likely AI-generated fake images of Swift circulated on social media this week, they galvanized her fans. Swifties found phrases and hashtags related to the images and flooded them with videos and photos of Swift performing. “Protect Taylor Swift” went viral, trending as Swifties spoke out against not just the Swift deepfakes, but all nonconsensual, explicit images made of women.
Swift, arguably the most famous woman in the world right now, has become a high-profile victim of an all-too-frequent form of harassment. She has yet to comment on the photos publicly, but her status gives her power to wield in a situation where so many women have been left with little recourse. Deepfake porn is becoming more common as generative artificial intelligence gets better: 113,000 deepfake videos were uploaded to the most popular porn websites in the first nine months of 2023, a significant increase over the 73,000 videos uploaded throughout all of 2022. In 2019, research from a startup found that 96 percent of deepfakes on the internet were pornographic.
The content is easy to find on search engines and social media, and has affected other female celebrities and teenagers. Yet, many people don’t understand the full extent of the problem or its impact. Swift, and the media mania around her, has the potential to change that.
“It does feel like this could be one of those trigger events” that could lead to legal and societal changes around nonconsensual deepfakes, says Sam Gregory, executive director of Witness, a nonprofit focused on using images and video to protect human rights. But Gregory says people still don’t understand how common deepfake porn is, and how harmful and violating it can be to victims.
In some ways, this deepfake disaster is reminiscent of the 2014 iCloud leak that led to nude photos of celebrities like Jennifer Lawrence and Kate Upton spreading online, prompting calls for greater protections on people’s digital identities. Apple ultimately ramped up security features.
A handful of states have laws around nonconsensual deepfakes, and there are moves to ban the practice at the federal level, too. Rep. Joseph Morelle (D-New York) has introduced a bill in Congress that would make it illegal to create and share deepfake porn without a person’s consent. Another House bill from Rep. Yvette Clarke (D-New York) seeks to give legal recourse to victims of deepfake porn. Rep. Tom Kean, Jr. (R-New Jersey), who in November introduced a bill that would require the labeling of AI content, used the viral Swift moment to draw attention to his efforts: “Whether the victim is Taylor Swift or any young person across our country—we need to establish safeguards to combat this alarming trend,” Kean said in a statement.
This isn’t the first time that Swift or Swifties have tried to hold platforms and people accountable. In 2017, Swift won a lawsuit she brought against a radio DJ who she claimed groped her during a meet-and-greet. She was awarded $1—the amount she sued for, and what her attorney Douglas Baldridge called a symbolic sum “the value of which is immeasurable to all women in this situation.”
Last fall, tens of thousands of people registered to vote after the superstar posted a link to Vote.org on Instagram. And in 2022, her fan base, enraged after waiting hours to buy tickets to the Eras Tour only to be beaten out by bots, reignited the conversation around antitrust issues with Ticketmaster and Live Nation’s mega-merger. A cringy Senate hearing followed, and an investigation into Live Nation’s agreements with venues and artists is ongoing.
Swift and her fans could push for federal legislation to pass. But their outrage could do something else: lead platforms to take notice. “When you have a really massive group of users saying this content is unacceptable in this very high-profile way, the power there is about what it says to the platform about what users will and won’t tolerate,” says Cailin O’Connor, a professor of philosophy at the University of California, Irvine, and coauthor of The Misinformation Age: How False Beliefs Spread. X did not respond to a request for comment on the images and its moderation efforts regarding deepfake porn. Elon Musk bought the site in 2022 and quickly gutted its moderation teams. Advertisers have also dropped off recently after Musk’s apparent endorsement of an antisemitic conspiracy theory.
It’s not clear whether Swift will take on this issue. A representative for Swift did not respond to a request for comment for this story. Harassment of female celebrities is frequent and often brushed aside, but deepfakes are harming not just them, but also people without the same power to fight back. This could be a moment for Swift to use her powerful platform—or at least for her fans to push the issue before the public.