Fact Check: X's Block on 'Taylor Swift' Searches Was Not Due to Her Politics

Raph_PH/Wikimedia Commons


The reason X (formerly Twitter) temporarily blocked all searches for the name “Taylor Swift” in January 2024 was to prevent the spread of her political messaging.


Rating: False

On Jan. 29, 2024, viral posts claimed X had blocked all searches for Taylor Swift for political reasons, specifically because she had expressed support for Joe Biden during his 2020 presidential campaign.

One post shared an old photograph of Swift holding a plate of cookies with “Biden Harris 2020” written on them in blue and white icing and claimed in the caption, “BREAKING: Elon Musk is stopping individuals from searching 'Taylor Swift' out of fear they will find this pro-Biden image of her. Retweet to get the word out that Taylor is a Biden voter.”

(Screenshot via X)

The statement was incorrect. Although it's true that "Taylor Swift" searches were temporarily blocked on the platform, it was not because of her politics. The reason for the block was the sudden, rapid proliferation of explicit fake images of the singer, likely AI-generated. Swift’s fans (known as Swifties) flooded the platform with messages of protest, using phrases like “Protect Taylor Swift” alongside real images of her in order to drown out the fake content, according to news reports. The deepfakes kept spreading despite the company’s efforts to remove them, prompting the decision to block all searches for her name.

According to The New York Times, Reality Defender, a cybersecurity company focused on detecting AI, determined with 90 percent certainty that the images were generated through a diffusion model, an AI-driven technology. Publicly available AI tools have made it easy and cheap for internet users to create deepfakes.

As we’ve explained in our coverage, “deepfake” refers to an image or video that was created with the aid of artificial intelligence (AI). Snopes previously explained, “GANs (generative adversarial networks, a form of AI) can learn various characteristics of a person in order to synthesize an artificial version. By using this technology, one can create a video that appears to show a person (say, Tom Cruise) performing a stunt (or other action) even though they never did any such thing. This technology can also be used to fabricate an entirely fictional person.”

X released a statement on Jan. 25, 2024, reiterating its policy that sharing nonconsensual nudity on the platform is strictly prohibited. "We have a zero-tolerance policy towards such content," the statement said. "Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them."

On Jan. 30, 2024, a bipartisan group of senators introduced a bill, the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, or the "Defiance Act," to criminalize the spread of nonconsensual, sexualized images generated by artificial intelligence.

As of this writing, X had re-enabled searches for Swift's name, but the singer had yet to comment publicly on the controversy.


Conger, Kate, and John Yoon. “Explicit Deepfake Images of Taylor Swift Elude Safeguards and Swamp Social Media.” The New York Times, 26 Jan. 2024. Accessed 31 Jan. 2024.

Evon, Dan. “How to Spot a Deepfake.” Snopes, 8 June 2022. Accessed 31 Jan. 2024.

The White House. “Press Briefing by Press Secretary Karine Jean-Pierre, NSC Coordinator for Strategic Communications John Kirby, and National Climate Advisor Ali Zaidi.” 26 Jan. 2024. Accessed 31 Jan. 2024.

Lodhi, Areesha. “X Blocks Taylor Swift Searches: What to Know about the Viral AI Deepfakes.” Al Jazeera. Accessed 31 Jan. 2024.

Montgomery, Blake. “Taylor Swift AI Images Prompt US Bill to Tackle Nonconsensual, Sexual Deepfakes.” The Guardian, 31 Jan. 2024. Accessed 31 Jan. 2024.

"X Blocks Searches for Taylor Swift after Explicit AI Images of Her Go Viral." BBC, 28 Jan. 2024. Accessed 31 Jan. 2024.