Neo-Nazis and white supremacists are sharing Hitler-related propaganda and trying to recruit new members on TikTok, according to a new report from the Institute for Strategic Dialogue (ISD) shared exclusively with WIRED. The TikTok algorithm is also promoting this content to new users, researchers found, as extremist communities are leveraging the huge popularity of TikTok among younger audiences to spread their message.
The report from ISD details how hundreds of extremist TikTok accounts are openly posting videos promoting Holocaust denial and the glorification of Hitler and Nazi-era Germany, and suggesting that Nazi ideology is a solution to modern-day issues such as the alleged migrant invasion of Western countries. The accounts also show support for white supremacist mass shooters and livestream-related footage or recreations of these massacres. Many of the accounts use Nazi symbols in their profile pictures or include white supremacist codes in their usernames.
Nathan Doctor, an ISD researcher who authored the report, says he began his investigation earlier this year when he came across one neo-Nazi account on TikTok while conducting research for another project.
He quickly uncovered a much broader network of accounts that appeared to be actively helping one another by liking, sharing, and commenting on each other’s videos in order to increase their viewership and reach.
The groups promoting neo-Nazi narratives are typically siloed on more fringe platforms, like Telegram, the encrypted messaging app. But Telegram has become a place to discuss recruitment techniques for TikTok specifically: White supremacist groups there share videos, images, and audio tracks that members can use, explicitly telling other members to cross-post the content on TikTok.
“We posted stuff on our brand new tiktok account with 0 followers but had more views than you could ever have on bitchute or twitter,” one account in a neo-Nazi group posted on Telegram about their outreach on TikTok. “It just reaches much more people.”
Others have followed suit. One prominent neo-Nazi has often asked his thousands of Telegram followers to “juice,” or algorithmically boost, his TikTok videos to increase their viral potential.
An extremist Telegram channel with 12,000 followers urged members to promote the neo-Nazi documentary Europa: The Last Battle by blanketing TikTok with reaction videos in an effort to make the film go viral. Researchers from ISD found dozens of videos on TikTok featuring clips from the film, some with over 100,000 views. “One account posting such snippets has received nearly 900k views on their videos, which include claims that the Rothschild family control the media and handpick presidents, as well as other false or antisemitic claims,” the researchers wrote.
This is far from the first time the role that TikTok’s algorithm plays in promoting extremist content has been exposed. Earlier this month, the Global Network on Extremism and Technology reported that TikTok’s algorithm was promoting the “adoration of minor fascist ideologues.” The same researchers found last year that it was boosting Eurocentric supremacist narratives in Southeast Asia. Earlier this month, WIRED reported how TikTok’s search suggestions were pushing young voters in Germany towards the far-right Alternative for Germany party ahead of last month’s EU elections.
“Hateful behavior, organizations and their ideologies have no place on TikTok, and we remove more than 98 percent of this content before it is reported to us,” Jamie Favazza, a TikTok spokesperson, tells WIRED. “We work with experts to keep ahead of evolving trends and continually strengthen our safeguards against hateful ideologies and groups.”
Part of the reason platforms like TikTok have in the past been unable to effectively clamp down on extremist content is due to the use of code language, emojis, acronyms, and numbers by these groups. For example, many of the neo-Nazi accounts used a juice box emoji to refer to Jewish people.
“At present, self-identified Nazis are discussing TikTok as an amenable platform to spread their ideology, especially when employing a series of countermeasures to evade moderation and amplify content as a network,” the researchers write in the report.
But Doctor points out that even when viewing non-English-language content, spotting these patterns should be possible. “Despite seeing content in other languages, you can still pretty quickly recognize what it means,” says Doctor. “The coded nature of it isn't an excuse, because if it's pretty easily recognizable to someone in another language, it should be recognizable to TikTok as well.”
TikTok says it has more than “40,000 trust and safety professionals” working on moderation around the globe, and the company says its Trust and Safety Team has specialists in violent extremism who constantly monitor developments in these communities, including the use of new coded language.
While many of the identified accounts are based in the US, Doctor found that the network was also international.
“It's definitely global, it's not even just the English language,” Doctor tells WIRED. “We found stuff in French, Hungarian, German. Some of these are in countries where Nazism is illegal. Russian is a big one. But we even found things that were a bit surprising, like groups of Mexican Nazis, or across Latin America. So, yeah, definitely a global phenomenon.”
Doctor did not find any evidence that the international groups were actively coordinating with each other, but they were certainly aware of each other’s presence on TikTok: “These accounts are definitely engaging with each other’s content. You can see, based on comment sections, European English-speaking pro-Nazi accounts reacting with praise toward Russian-language pro-Nazi content.”
The researchers also found that beyond individual accounts and groups promoting extremist content, some real-world fascist or far-right organizations were openly recruiting on the platform.
Accounts from these groups posted links in their TikTok videos to a website featuring antisemitic flyers and instructions on how to print and distribute them. They also boosted Telegram channels featuring more violent and explicitly extremist discourse.
In one example cited by ISD, an account whose username contains an antisemitic slur and whose bio calls for an armed revolution and the complete annihilation of Jewish people has shared incomplete instructions for building improvised explosive devices, 3D-printed guns, and “napalm on a budget.”
To receive the complete instructions, the account holder urged followers to join a “secure groupchat” on encrypted messaging platforms Element and Tox. Doctor says that comments under the account holder’s videos indicate that a number of his followers had joined these chat groups.
ISD reported this account, along with 49 other accounts, in June for breaching TikTok’s policies on hate speech, encouragement of violence against protected groups, promoting hateful ideologies, celebrating violent extremists, and Holocaust denial. In all cases, TikTok found no violations, and all accounts were initially allowed to remain active.
A month later, 23 of the accounts had been banned by TikTok, indicating that the platform is at least removing some violative content and channels over time. Prior to being taken down, the 23 banned accounts had racked up at least 2 million views.
The researchers also created new TikTok accounts to understand how Nazi content is promoted to new users by TikTok’s powerful algorithm.
Using an account created at the end of May, researchers watched 10 videos from the network of pro-Nazi users, occasionally clicking on comment sections but stopping short of any form of real engagement such as liking, commenting, or bookmarking. The researchers also viewed 10 pro-Nazi accounts. When the researchers then flipped to the For You feed within the app, it took just three videos for the algorithm to suggest a video featuring a World War II-era Nazi soldier overlaid with a chart of US murder rates, with perpetrators broken down by race. Later, a video appeared of an AI-translated speech from Hitler overlaid with a recruitment poster for a white nationalist group.
Another account created by ISD researchers saw even more extremist content promoted in its main feed, with 70 percent of videos coming from self-identified Nazis or featuring Nazi propaganda. After the account followed a number of pro-Nazi accounts in order to access content on channels set to private, the TikTok algorithm also promoted other Nazi accounts to follow. All 10 of the first accounts recommended by TikTok to this account used Nazi symbology or keywords in their usernames or profile photos, or featured Nazi propaganda in their videos.
“In no way is this particularly surprising,” says Abbie Richards, a disinformation researcher specializing in TikTok. "These are things that we found time and time again. I have certainly found them in my research."
Richards wrote about white supremacist and militant accelerationist content on the platform in 2022, including the case of neo-Nazi Paul Miller, who, while serving a 41-month sentence for firearm charges, featured in a TikTok video that racked up more than 5 million views and 700,000 likes during the three months it was on the platform before being removed.
Marcus Bösch, a researcher at Hamburg University who monitors TikTok, tells WIRED that the report’s findings “do not come as a big surprise,” and he’s not hopeful there is anything TikTok can do to fix the problem.
“I’m not sure exactly where the problem is,” Bösch says. “TikTok says it has around 40,000 content moderators, and it should be easy to understand such obvious policy violations. Yet due to the sheer volume [of content], and the ability by bad actors to quickly adapt, I am convinced that the entire disinformation problem cannot be finally solved, neither with AI nor with more moderators.”
TikTok says it has completed a mentorship program with Tech Against Terrorism, a group that seeks to disrupt terrorists’ online activity and helps TikTok identify online threats.
“Despite proactive steps taken, TikTok remains a target for exploitation by extremist groups as its popularity grows,” Adam Hadley, executive director of Tech Against Terrorism, tells WIRED. “The ISD study shows that a small number of violent extremists can wreak havoc on large platforms due to adversarial asymmetry. This report therefore underscores the need for cross-platform threat intelligence supported by improved AI-powered content moderation. The report also reminds us that Telegram should also be held accountable for its role in the online extremist ecosystem.”
As Hadley outlines, the report’s findings show that there are significant loopholes in the company’s current policies.
“I've always described TikTok, when it comes to far-right usage, as a messaging platform,” says Richards. “More than anything, it's just about repetition. It's about being exposed to the same hateful narrative over and over and over again, because at a certain point you start to believe things after you just see them enough, and they start to really influence your worldview.”