Community Notes, the crowdsourced fact-checking program relaunched under that name a year ago to tackle disinformation on X (formerly Twitter), has flown under the radar for much of its existence. But in the past week it has been heralded by Elon Musk and X CEO Linda Yaccarino as a key tool in tackling the toxic deluge of disinformation that has flooded social media, and X in particular, during the ongoing Israel-Hamas war.
On Saturday, the company wrote on its own platform that “notes across the platform are now being seen tens of millions of times per day, generating north of 85 million impressions in the last week.” It added that thousands of new contributors had been enrolled in the system. However, a WIRED investigation found that Community Notes appears not to be functioning as designed, may be vulnerable to coordinated manipulation by outside groups, and lacks transparency about how notes are approved. Sources also claim that the program is riddled with infighting and disinformation, with no apparent oversight from the company itself.
“I understand why they do it, but it doesn't do anything like what they say it does,” one Community Notes contributor tells WIRED. “It is prone to manipulation, and it is far too slow and cumbersome. It serves no purpose as far as I can see. I think it's probably making the disinformation worse, to be honest.”
This contributor, who was granted anonymity for this story because of concerns about retaliation from Musk or his supporters, should know about platform manipulation, because he is one of those engaging in it.
He is one of a group of about 25 Community Notes contributors who actively coordinate on a daily basis to upvote or downvote particular notes. He is also a member of the North Atlantic Fella Organization (NAFO), a decentralized online volunteer movement that emerged following Russia’s invasion of Ukraine and uses humor to counter Russian disinformation and propaganda while raising funds in support of Ukraine.
“We have a group so we can coordinate who writes what Community Notes, and if you write one, you can coordinate so you get one voted as helpful, and there's no validation on any of that,” the contributor says. The contributor claims that Russian embassies have engaged in similar tactics and alleges that they “will downvote anything that's anti-Russian.” A WIRED review of unpublished notes accompanying contested Russian embassy posts written by NAFO members shows they have all been repeatedly downvoted.
A contributor based in the southern hemisphere tells WIRED that he has two accounts on X and alleges that both have been approved for access to the Community Notes system, suggesting the program may be vulnerable to even broader manipulation, much as Kremlin trolls operated multiple Twitter accounts impersonating US citizens to influence the 2016 election.
Community Notes began life as Birdwatch, first announced in October 2020 before going live as a pilot in January 2021, weeks after Donald Trump supporters attacked the United States Capitol. Envisioned as a way of harnessing the power of the crowd to debunk misinformation in near real time, Birdwatch was never fully embraced by the company, which was criticized in March 2022, soon after the Russian invasion of Ukraine, for failing to roll the program out more broadly.
At the time, there was trepidation within the company about how effective the tool would be at controlling the quality of information on the platform, Yoel Roth, Twitter's former head of Trust and Safety, tells WIRED.
“There was a belief amongst the Trust and Safety team that people would not do uncompensated labor at scale, and that the things that would attract attention in a Birdwatch format didn't necessarily correspond with the harmful things that constituted violations of policies,” says Roth.
This truth was later borne out, at least in part. A study presented at a conference in April 2022 by researchers at MIT found that Birdwatch users were much more likely to fact-check posts expressing political views that differ from their own. David Rand, the study’s lead researcher and a professor at MIT’s Sloan School of Management, says that while around 80 percent of the tweets that users chose to annotate were, in fact, misleading, users overwhelmingly tended to prioritize political content. Users were also likely to flag a piece of content based on whether it seemed “reasonable.” It is unclear whether these users had the knowledge and skills to debunk misinformation or offer alternative, correct information.
“In a situation like what is happening in Israel and Gaza right now, where there's a massive amount of unknowns, that gut-check of ‘Does this seem reasonable?’ is not great. Even professional journalists, as I understand it, are having a lot of trouble understanding what's true and what's not true,” says Rand. With such a fast-moving and volatile situation, “what I expect the crowd to produce is a lot of noise,” he says.
Almost immediately after Musk took control of the company in late October 2022, he renamed Birdwatch “Community Notes,” claiming that the system had “incredible potential for improving information accuracy.” In the months that followed, X systematically eradicated almost all of the teams and individuals overseeing content moderation and trust and safety at the company, putting even greater emphasis on the role of Community Notes.
Joe Benarroch, head of business operations at X, tells WIRED that the company has contributors across 44 countries, and while he would not give an exact number, he noted that more than 10,000 people had joined the program in the past week. Alongside contributors who volunteer for the program, some are invited by X itself, according to one contributor WIRED spoke to and a search of X, where several contributors say the company invited them to join.
Benarroch did not address whether there is any form of moderation for Community Notes. And while he says “Community Notes were designed to tackle misinformation on X,” Roth says the program was never intended to perform that task alone.
“The intention of Birdwatch was always to be a complement to, rather than a replacement for, Twitter's other misinformation methods,” says Roth. “The company fired the entire curation team, without whom misinformation enforcement can't operate, even if the policies are still on the books,” he says, referring to the former employees who curated collections of tweets for corrections and produced the labels that let users know when something was misleading.
“Community Notes is an imperfect replacement for Trust and Safety staff,” says one former Twitter Trust and Safety employee who spoke to WIRED on the condition of anonymity for fear of retaliation. “You can’t just outsource that work to the community.”
While X claims Community Notes is having a significant impact on tackling disinformation on the platform, the notes system itself can also spread disinformation. Last week WIRED revealed that a note appended to a post from Donald Trump Jr. wrongly claimed a video he posted about the Hamas attack on Israel was fake. The original note has since been replaced with one citing WIRED’s report.
Many notes written by contributors are never seen by the public. A WIRED examination of these unpublished notes shows that the Community Notes community is riven by the same infighting and misinformation as the main X platform.
On Saturday, the official Israel account on X posted a picture of what looks like a child’s bedroom with blood covering the floor. “This could be your child’s bedroom. No words,” the post reads. There is no suggestion the picture is fake, and publicly there are no notes on the post. However, in the Community Notes backend, viewed by WIRED, multiple contributors are engaging in a conspiracy-fueled back-and-forth.
“Deoxygenated blood has a shade of dark red, therefore this is staged,” one contributor wrote. “Post with manipulative intent that tries to create an emotional reaction in the reader by relating words and pictures in a decontextualized way,” another writes.
“There is no evidence that this picture is staged. A Wikipedia article about blood is not evidence that this is staged,” another contributor writes.
“There is no evidence this photo is from the October 7th attacks,” another claims.
These types of exchanges raise questions about how X approves contributors for the program, but this, along with precisely what factors are considered before each note is approved, remains unknown. X’s Benarroch did not respond to questions about how contributors are chosen.
None of those approved for the system are given any training, according to all the contributors WIRED spoke to, and the only limitation placed on new contributors is that they cannot write notes of their own until they have first rated a number of other notes. One contributor claims this approval process can take less than six hours.
In order for notes to be attached to a post publicly, they need to be approved as “helpful” by a certain number of contributors, though how many is unclear. X describes “helpful” notes as ones that get “enough contributors from different perspectives.” Benarroch did not say how X evaluates a user’s political leanings. However, the system has, at least previously, employed a technique known as bridge-based ranking, which favors notes that receive positive ratings from users estimated to hold differing viewpoints. Still, how this works is not clear to at least some Community Notes contributors.
“I don't see any mechanism by which they can know what perspective people hold,” Anna, a UK-based former journalist whom X invited to become a Community Notes contributor, tells WIRED. “I really don't see how that would work, to be honest, because new topics come up that one could not possibly have been rated on.” Anna asked only to be identified by her first name for fear of backlash from other X users.
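X has published the source code for the Community Notes ranking algorithm, and in that design contributors' perspectives are never asked for directly: the scorer infers them from each account's rating history using matrix factorization, and a note is published only when the portion of its “helpful” ratings that cannot be explained by viewpoint alignment clears a threshold. The sketch below is a minimal, illustrative version of that bridge-based idea, not X's actual code; the toy data, learning rates, regularization weights, and the 0.4 cutoff are assumptions chosen for demonstration.

```python
import numpy as np

# Toy rating matrix: ratings[user, note] is +1 ("helpful"), -1 ("not
# helpful"), or 0 (no rating). Note 0 is rated helpful by everyone who saw
# it; notes 1 and 2 split cleanly along a single latent viewpoint axis.
ratings = np.array([
    [1.0,  1.0, -1.0],
    [1.0, -1.0,  1.0],
    [1.0,  1.0, -1.0],
    [0.0, -1.0,  1.0],
])
n_users, n_notes = ratings.shape
dim = 1  # one latent viewpoint dimension

rng = np.random.default_rng(0)
mu = 0.0                                     # global offset
user_b = np.zeros(n_users)                   # per-rater intercepts
note_b = np.zeros(n_notes)                   # per-note intercepts
user_f = rng.normal(0, 0.1, (n_users, dim))  # rater viewpoint factors
note_f = rng.normal(0, 0.1, (n_notes, dim))  # note viewpoint factors

lr, reg_b, reg_f = 0.05, 0.03, 0.15          # assumed hyperparameters
mask = ratings != 0

# Fit rating ~ mu + user_b + note_b + user_f . note_f by gradient descent.
# Because the factor term can absorb agreement that lines up with viewpoint,
# a note's intercept ends up measuring cross-viewpoint ("bridging") approval.
for _ in range(3000):
    pred = mu + user_b[:, None] + note_b[None, :] + user_f @ note_f.T
    err = np.where(mask, ratings - pred, 0.0)
    mu += lr * err.mean()
    user_b += lr * (err.sum(axis=1) - reg_b * user_b)
    note_b += lr * (err.sum(axis=0) - reg_b * note_b)
    user_f += lr * (err @ note_f - reg_f * user_f)
    note_f += lr * (err.T @ user_f - reg_f * note_f)

THRESHOLD = 0.4  # assumed publication cutoff for the note intercept
for n in range(n_notes):
    status = "show publicly" if note_b[n] > THRESHOLD else "keep hidden"
    print(f"note {n}: intercept {note_b[n]:+.2f}, "
          f"viewpoint factor {note_f[n, 0]:+.2f} -> {status}")
```

In this toy run, the unanimously approved note earns a high intercept and clears the bar, while the two notes whose support splits along the latent axis see their agreement absorbed by the viewpoint factors and stay hidden, despite collecting “helpful” ratings. That is the mechanism behind the “different perspectives” requirement: a “perspective” here is just a pattern the model estimates from an account's past ratings, not anything contributors declare.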
For all the notes that do become public, there are many more that remain unseen, either because they are deemed unhelpful or, in the majority of cases reviewed by WIRED, because they simply haven't received enough ratings from other contributors. One contributor tells WIRED that 503 notes he had rated in the past week remained in limbo because not enough people had voted on them.
“I think one of the issues with Community Notes at its core, it's not really scalable for the amount of media that's being consumed or posted in any given day,” the contributor, who is known online as Investigator515, tells WIRED. They asked to only be identified by their handle because of fears of damage to their professional reputation.
All of the contributors who spoke to WIRED feel that Community Notes is not up to the task of policing the platform for misinformation, and none of them believe the program will improve in the coming months if it remains in its current form.
“It's much harder to deal with misinformation when there isn't the top-down moderation that Twitter used to have, because accounts willfully spreading misinformation would get suspended before they could really do a lot of harm,” the longtime contributor says. “So a reliance on Community Notes is not good. It's not a replacement for proper content moderation.”
Updated at 7:10 pm ET, October 17, 2023, to add further context about how the Community Notes system may gauge the viewpoint diversity of contributors.