In 2024, more than 50 countries around the world will vote in national elections, making it the largest election year in history. But as activists, legislators, and journalists have raised alarms about the potential for election-related misinformation, disinformation, and violence, many platforms have laid off the very workers and teams responsible for keeping social media safe.
Meta, in particular, with some 3 billion users across WhatsApp, Instagram, and Facebook, is a uniquely powerful force in shaping the global information ecosystem. In 2016, the platform took center stage over its seemingly pivotal role in propelling Donald Trump into the White House. Sensitive to criticism that it had not done enough to protect American democracy, Meta invested in new tools and processes to try to keep election-related misinformation and disinformation off its platforms during the 2020 presidential election. But once the race was over, the company rolled back many of these mitigation measures, allowing narratives that questioned the legitimacy of Joe Biden’s win to circulate in the lead-up to the Capitol insurrection on January 6, 2021; reporting from OneZero at the time found that Stop the Steal groups continued to balloon in the weeks after the election. And despite the violence of January 6, Meta has continued to allow ads that question the results of the 2020 US election.
Facebook has also been widely blamed for facilitating the rise of former Philippine president Rodrigo Duterte by allowing trolls targeting journalists, activists, and opposition lawmakers to flourish. In the lead-up to Sri Lanka’s 2020 elections, Meta allowed politicians to promote content that fact-checkers had already flagged as false, including posts that incited violence against the country’s Muslim minority.
“Social media platforms need to learn from past mistakes to be able to address them better this year,” Pamela San Martín, a member of Meta’s Oversight Board, tells WIRED. “We’re convinced that social media [companies] have to do their part to meet these huge global challenges and what’s happening around the world in these elections.” The Oversight Board is an independent body charged with making judgments about certain of Meta’s content moderation decisions. The board is funded via an independent trust set up by the company.
Over the past eighteen months, Meta has joined the ranks of several Big Tech companies that have slashed their staff. Trust and safety teams, which keep mis- and disinformation and hate speech off the platforms, have been particularly hard hit. “Protecting the 2024 elections is one of our top priorities, and we have around 40,000 people globally working on safety and security—more than we had during the 2020 cycle,” Meta spokesperson Corey Chambliss told WIRED. “Our integrity efforts continue to lead the industry, and with each election we incorporate the lessons we’ve learned to help stay ahead of emerging threats.”
In an interview with WIRED, San Martín spoke about preparing the company for a critical year for democracy, the board’s first emergency judgment, and the weaponization of social platforms.
This interview has been edited and condensed for length and clarity.
WIRED: What are your biggest concerns around the US elections this year? Do you feel like Meta has learned from 2016 and 2020?
Pamela San Martín: No election is exactly the same as the previous one. So even though we’re addressing the problems that arose in prior elections as a starting point, it is not enough. I think one of the things that has to be taken very, very seriously into account is, if we think of what happened in the US election and then of what happened in the Brazilian election, we’ve seen an advance in Meta using more tools to address election-related issues. [Following both the US and Brazilian elections, far-right groups, encouraged by disinformation that cast doubt on the outcome of the vote, stormed the legislatures in both countries.]
But between the US election [in 2020] and the Brazilian election [in 2022], Meta had not done enough to address the potential misuse of its platforms through coordinated campaigns, people organizing, or using bots on the platforms to convey a message to destabilize a country, to create a lack of trust or confidence in electoral processes. I think that is something Meta has to pay specific attention to in the US: how its platforms can be abused by influential users who try to take advantage of its loopholes.
WIRED: How did the Capitol riot on January 6 change the way the company thought about its approach to elections or thought about the stakes of these moments?
One of the things the board told Meta was that addressing these coordinated campaigns needed to be part of its election integrity measures. Lessons need to be learned not only from January 6, but from the different election cycles Meta has gone through in different countries.
In the Trump case, we recommended that Meta conduct a human rights impact assessment regarding how its algorithms play a role in amplifying the violent narratives that had been part of what led to January 6. And Meta decided not to accept that recommendation. But that is one of the things they need to take into account—how their own algorithms, their own newsfeeds, their own recommendation systems, their own political ads can play a part in the protection or the disruption of electoral processes. [Meta spokesperson Corey Chambliss referred WIRED to an earlier statement in which the company committed to allowing academic research into its impact on the US 2020 elections, but did not commit to a human rights audit.]
WIRED: The US campaign season is unusually long in comparison to other countries. How does this make things more difficult for Meta?
I think the issues that can impact an election change over time. To adequately address, prevent, and mitigate adverse outcomes, there has to be real-time monitoring. And of course, when you have longer electoral campaigns, real-time monitoring takes longer. At the same time, in a year when we will have tons of other elections throughout the world, I think we should not see the US in isolation. The fact that US campaigns are longer will of course have an impact, because Meta has to monitor the elections for a longer time and will have to stretch the use of different tools and resources. But it can’t be an either-or equation. Meta has to address the different issues that arise in all the different elections, including the US, but not only the US.
WIRED: How many of the board’s case selections over the past year have been made with an eye toward the elections in 2024? For instance, the Cambodia case, where the board recommended that former prime minister Hun Sen’s account be suspended for threatening violence against his political opponents, and the case concerning a doctored video of President Joe Biden that makes it appear he’s inappropriately touching his granddaughter.
A lot of the decisions and the case selection were geared toward trying to provide Meta with more input on how to deal with these elections, or with the types of contexts that can very, very easily become crisis contexts.
We of course have addressed some that are election related, and that we expect will help improve social media content moderation around elections. This includes publishing election integrity metrics, showing that “newsworthiness” cannot be an excuse to promote violence or to silence the opposition, and stressing that there should be serious consequences when political leaders repeatedly break the rules. A lot of the issues that are showing up around elections are not necessarily new: online disinformation, maliciously manipulated media, political intimidation, and incitement to violence, among a host of other problems. Social media platforms need to learn from past mistakes to be able to address them better this year.
WIRED: In August, the board advised Meta to remove the page of Cambodia’s then prime minister, Hun Sen, after he threatened violence against his political opponents. Meta, however, chose to leave his page up. Are you worried about what that means for other leaders in a year that could be marked by contentious elections?
The reason why we recommended that Meta suspend Hun Sen was not because he himself was violent. We have many violent leaders throughout the world, in very different regions and very different countries. We asked for his suspension because he was using the platform to increase the violence against his opponents, to increase the threats, to incite violence against his opponents. It is the weaponization of the platform.
I think that Meta’s decision sends the wrong message to other political leaders, especially in countries that will face elections this year. It’s hard to imagine a clearer case of a political leader weaponizing social media to amplify threats and to silence and intimidate the political opposition. What we were seeking with the recommendations in this case was for Meta to set out clear guidance on the process it would adopt to deter public figures from exploiting its platforms to threaten and silence political opposition and incite violence. This was a lost opportunity.
WIRED: What do you see as the implications of that decision, in terms of what Meta considers violence or civil unrest?
This is something that can occur in different countries around the world, not only in countries that can be considered to have “lower democratic standards.” That’s why when you have a leader using the platforms to increase threats, to weaponize the platform, Meta should take steps to deter that political leader from using these platforms in that way.
What we asked for was for Meta to clarify that its policy restricting the accounts of public figures should apply not only in contexts where there are incidents of civil unrest or violence, but also where political expression is preemptively suppressed or met with violence or threats of violence from the state using Meta’s platforms. The question is, what should we consider civil unrest? Under the current policy, civil unrest has to be an incident of violence, whether isolated or ongoing. When you have violence that preemptively suppresses political opposition and political discourse through the use of Meta’s platforms, should that also be considered civil unrest? For the board, it should.
WIRED: We saw the board deal with its first emergency decisions around the Israel–Hamas conflict late last year. The case dealt with posts that had been improperly removed from Meta’s platforms for violating its policies, but the board felt they were important for the public to understand the conflict. Do you anticipate that this is a mechanism the board may need to rely on to render judgments in time spans that can have a meaningful effect on the democratic process?
I think that the exercise we had with the Israel–Hamas conflict was successful, and I expect us to use it again this year, maybe in election-related issues. And I say “maybe,” because when you are trying to protect elections, when you’re trying to protect democratic processes, it is something that you have to prepare ahead of time. The reason why we, for example, asked Meta to establish what its election integrity efforts would be, and what they expected to achieve with those, is because you need planning to establish the different measures to address what can result from the elections. There, of course, can be things that have to be addressed at a specific moment.
But when Meta prepares for elections, for example, when they establish what they call the EPOC, the Election Operations Center, they set it up with enough time to implement the measures that will be adopted throughout the election. We expect Meta to prepare in the same way for the possibility that an expedited decision is needed, and to take those steps preemptively, not to wait until we have a decision that has to be addressed.
WIRED: We’ve seen a lot of layoffs across the sector, and many of the people who were in charge of election efforts at Meta have been laid off in the past year. Do you have concerns about the company’s preparedness for such a major year for democracy, particularly given its track record?
A context in which you have huge layoffs is of course a concern. It can’t just be the countries with the most users, or those that generate the most revenue, that get prioritized. We still have problems with inadequate staffing in underinvested countries, many of which will have elections this year. We are living through a worldwide democratic backlash, and in that context Meta has a heightened responsibility, especially in the global south, where its track record of living up to these expectations has been poor.
I acknowledge that Meta has already set up, or knows how to set up, different risk evaluation and mitigation measures that can be applied to elections. Meta has also used election-specific initiatives in different countries—for example, working with electoral authorities, adding labels to election-related posts, directing people to reliable information, prohibiting paid advertisements that call into question the legitimacy of elections, and implementing WhatsApp forward limits. But the board has found that, in enforcing its community standards, Meta sometimes fails to consider the wider political and digital context. Many times this has led to disproportionate restrictions on freedom of expression, or to underenforcement against content promoting or inciting violence. Meta must have adequate linguistic and cultural knowledge, and the necessary tools and channels to escalate potentially violating content.