Massive layoffs across the tech sector have hit trust and safety teams hard over the past year. But with wars raging in Ukraine and the Middle East and more than 50 elections taking place in the next 12 months, experts worry that a nascent industry of startups created to keep people safe online won’t be able to cope.
The cuts made headlines a year ago, when X (then Twitter) fired 3,700 people—including hundreds in trust and safety roles. Since then, Meta, Alphabet, and Amazon have made similar cuts. The layoffs at X inspired other platforms to do the same, argues Sabhanaz Rashid Diya, founding director at tech policy think tank the Tech Global Institute and a former member of Meta’s policy team. “In many ways, Twitter got away with it,” she says. “That’s given the other companies the confidence to say, ‘You know what? It’s OK. You can survive and not face a terrible consequence.’”
Still, the cost of these cuts is arguably already evident in the way major platforms have scrambled to respond to the war between Israel and Hamas. And the shift away from in-house trust and safety teams has created an opening for consultancies and startups to offer something new: trust and safety as a service.
These companies, many of them founded and staffed by people with Big Tech pedigrees, let platforms “buy rather than build” trust and safety services, says Talha Baig, a former Meta engineer whose startup, Sero AI, recently received backing from accelerator Y Combinator. “There is a lot more labor out on the marketplace, and there’s also a lot more customers willing to buy that labor.”
But experts warn that outsourcing trust and safety also means outsourcing responsibilities to teams with no power to change the way platforms actually work.
Sahar Massachi, a former member of Meta’s civic integrity team and cofounder and executive director of the Integrity Institute think tank, worries that by outsourcing key functions, platforms may be undermining their ability to improve products. Trust and safety issues can sometimes be more about product design than active moderation—should a user be able to reshare content? How much weight should different metrics be given within a recommendation algorithm? “The vendors could be great, but they won’t be able to have insight into that because of the ways that companies work,” Massachi says.
The same is true of the AI systems that companies use to help flag potentially dangerous or abusive content. Platforms often use huge troves of data to build internal tools that help them streamline that process, says Louis-Victor de Franssu, cofounder of trust and safety platform Tremau. But many of these companies have to rely on commercially available models to build their systems—which could introduce new problems.
“There are companies that say they sell AI, but in reality what they do is they bundle together different models,” says Franssu. This means a company might be combining a bunch of different machine learning models—say, one that detects the age of a user and another that detects nudity to flag potential child sexual abuse material—into a service they offer clients.
And while this can make services cheaper, it also means that any issue in a model an outsourcer uses will be replicated across its clients, says Gabe Nicholas, a research fellow at the Center for Democracy and Technology. “From a free speech perspective, that means if there’s an error on one platform, you can’t bring your speech somewhere else—if there’s an error, that error will proliferate everywhere.” This problem can be compounded if several outsourcers are using the same foundational models.
By outsourcing critical functions to third parties, platforms could also make it harder for people to understand where moderation decisions are being made, or for civil society—the think tanks and nonprofits that closely watch major platforms—to know where to place accountability for failures.
“[Many watching] talk as if these big platforms are the ones making the decisions. That’s where so many people in academia, civil society, and the government point their criticism to,” says Nicholas. “The idea that we may be pointing this to the wrong place is a scary thought.”
Historically, large firms like Telus International, Teleperformance, and Accenture would be contracted to manage a key part of outsourced trust and safety work: content moderation. This often looked like call centers, with large numbers of low-paid staffers manually parsing through posts to decide whether they violate a platform’s policies against things like hate speech, spam, and nudity. New trust and safety startups are leaning more toward automation and artificial intelligence, often specializing in certain types of content or topic areas—like terrorism or child sexual abuse—or focusing on a particular medium, like text versus video. Others are building tools that allow a client to run various trust and safety processes through a single interface.
Big Tech companies have tended to see trust and safety as a cost center, says Baig—something they have to do to keep regulators and civil society groups at bay, but without much monetary value. But that soon may change. The European Union’s Digital Services Act and the UK’s Online Safety Act, for instance, have created new obligations for tech companies big and small to monitor what happens on their platforms, and these pieces of legislation allow governments to levy huge fines.
“Companies don’t change the way in which they moderate content on their platform to gain 5, 10, 30 percent efficiency,” says Tremau’s Franssu. “What will motivate them is if they’re scared of getting fined, especially fines as big as 6 percent of global annual revenue, or criminal liability, as we may see in the UK.”
New regulations in the UK and Europe will also come to bear on smaller platforms, particularly around the kinds of content and services children can access. Startups may prefer to buy trust and safety as a service, rather than building their own teams and systems, says Sara Ittelson, a partner at the venture fund Accel, which has invested in the trust and safety tool Cinder. “It used to be that companies thought that trust and safety issues were only surfacing for platforms of a particular size,” she says. “But in reality, you’re going to get them pretty early on.” And people, she argues, ultimately don’t want to use platforms they don’t feel safe on or that are full of junk content.
The explosion of interest in generative AI has only increased the pressure on companies to address trust and safety issues earlier in their life cycles. Generative AI tools can now be used to manufacture and share child sexual abuse material and nonconsensual pornography, both of which would violate most platforms’ guidelines. “There’s much greater awareness as to how these tools can be exploited,” Ittelson says. This has raised questions for companies about how they are going to make sure their platforms aren’t overrun with generative AI content, or how they’re going to ensure their tools aren’t abused.
“Gen AI is making it 10 times worse,” says Dror Nahumi, a partner at Norwest Venture Partners, which has invested in trust and safety startup ActiveFence. “If I’m a bad actor and I was creating an article a week, now I could create 10 different posts in the same week without making any extra effort.”
While investors that spoke to WIRED were hesitant to guess at the potential size of the trust and safety as a service industry, ActiveFence, which was founded in 2017 and is among the oldest players in the field, raised $100 million in 2021, when it was valued at about half a billion dollars. And its cofounder, Noam Schwartz, says that valuation has risen.
While it’s still nascent, the industry is clearly growing. “This is exactly the way the cybersecurity industry was 20 years ago,” says Schwartz. A 2020 report from the venture capital firm Paladin Capital found that the industry had already raised over $1 billion in funding, and a 2023 report from the UK Department for Science, Innovation and Technology estimated that “Safety Tech,” which includes everything from content moderation to scam detection, was on track to hit £1 billion ($1.22 billion) in revenue by the mid-2020s.
Though Nahumi says the Big Tech layoffs may indicate that there is, momentarily, less appetite to spend on trust and safety in general, “in the long term, we see that as a good thing for the companies in the space because it means that [tech companies] will have to rely more and more on services from companies that specialize in the space, and not something built in-house.”