The UK’s ambitious and controversial proposed internet regulation started with scribblings on the back of a brie-and-cranberry sandwich packet from Pret a Manger. Those notes, from discussions between academics Lorna Woods and William Perrin about how to make tech companies responsible for online harms, became an influential white paper in 2019. That in turn became the foundation of a draft law called the Online Safety Bill, a sweeping attempt to turn the UK into the “safest place in the world to be online” by regulating how platforms should handle harmful content, including child sexual abuse imagery, cyberbullying, and misinformation.
Since then, Britain has endured three prime ministers (and one lettuce), four digital ministers, a pandemic, and a rocky exit from the European Union. Successive iterations of the ruling Conservative government have expanded the bill that sprang from Woods and Perrin’s paper, mutating it from a genuine attempt to hold tech platforms to account for hosting harmful content, into a reflection of Britain’s post-Brexit political dysfunction.
The current government is widely expected to be voted from power next year, but the draft law returns to the House of Commons today, where members of parliament will have their final chance to debate its content. “It is very different from the sandwich packet, not least because there’s no brie smears on it,” says Woods, a law professor at the University of Essex. More significantly, different Conservative administrations have each left their own mark on it. “I think perhaps that has added to the baroque ornamentation,” Woods says.
Many others are far less measured in their criticism. The bill as it stands today sprawls to more than 260 pages, reflecting how ministers and MPs bolted on their own preoccupations, from cancel culture to security to immigration. Many of the original misinformation provisions have been removed or watered down. Additions to the bill include a controversial requirement that messaging platforms scan content for child sexual abuse images, something that tech companies and privacy campaigners say can only be achieved by weakening end-to-end encryption.
Major platforms, including WhatsApp and Signal, have threatened to pull out of the UK if the law is passed. They probably aren’t bluffing, and the bill probably will pass.
Earlier iterations of the bill took a relatively thoughtful approach to dealing with dangerous content online. Alongside provisions for tackling the most clearly illegal and harmful content, such as child sexual abuse material (CSAM), it acknowledged that sometimes legal content can be harmful because of how it’s amplified or targeted. For example, it may not be illegal to say that vaccines don’t work, but in the context of a deadly pandemic that message could become very harmful if shared widely and then served up by platforms’ algorithms over and over to people susceptible to believing it. The bill originally looked at how to stop or limit this “legal but harmful” material from doing damage offline, not necessarily by banning the content itself, but by limiting how it ends up in users’ feeds, or who it can be served to. For example, algorithms might have to be tweaked to stop them from recommending posts that promote suicide to people in distress, or posts about extreme weight loss to young users.
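To make that systems-level idea concrete, here is a minimal sketch in Python of what such a rule could look like inside a recommender. Everything in it is hypothetical and invented for illustration—the Post and User fields, the in_distress flag, the SUPPRESS_FOR policy table—and no platform’s actual ranking code works exactly like this. The point is the mechanism: content stays up, but the ranker declines to push certain topics at vulnerable users.

```python
# Hypothetical sketch: applying "legal but harmful" rules at the
# recommendation layer rather than by deleting content.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    topics: set[str]   # e.g. {"extreme-weight-loss"}
    score: float       # relevance score from the ranking model

@dataclass
class User:
    id: str
    age: int
    in_distress: bool  # hypothetical signal from safety classifiers

# Hypothetical policy table: topic -> predicate deciding which users
# it must NOT be recommended to. The content itself is never removed.
SUPPRESS_FOR = {
    "suicide-promotion": lambda user: user.in_distress,
    "extreme-weight-loss": lambda user: user.age < 18,
}

def rank_feed(user: User, candidates: list[Post]) -> list[Post]:
    """Rank candidate posts, skipping any whose topics are suppressed
    for this particular user."""
    allowed = [
        post for post in candidates
        if not any(SUPPRESS_FOR.get(topic, lambda u: False)(user)
                   for topic in post.topics)
    ]
    return sorted(allowed, key=lambda post: post.score, reverse=True)

feed = rank_feed(
    User(id="u1", age=15, in_distress=False),
    [Post(id="p1", topics={"extreme-weight-loss"}, score=0.9),
     Post(id="p2", topics={"cooking"}, score=0.4)],
)
# feed contains only p2: the weight-loss post remains on the platform,
# but is never recommended to this under-18 user.
```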
That approach, of targeting online harms at a platform level, has formed the basis of regulation in the European Union, where the Digital Markets Act and Digital Services Act (DSA) aim to make tech companies accountable for the offline consequences of their business models. But the mutation of the Online Safety Bill has shifted the UK’s focus away from regulating the systems and algorithms that can make content dangerous, and toward the content itself.
The bill now tries to identify what should and shouldn’t be legal speech online, and it would force companies to act on the illegal stuff. Meanwhile, lawful but potentially harmful content is left for platforms to regulate—or let be—via their own terms and conditions. The logic goes that requiring platforms to restrict legal but harmful content would amount to censorship, anathema to free speech; adults should be able to see potentially harmful content and make their own decisions.
Some critics see this shift as a result of the bill being hijacked by the factional, and increasingly extreme, politics of the Conservative Party, which has taken on a populist and nationalist tinge since the divisive vote to leave the European Union in 2016. “The Conservative Party has had to suddenly wrangle different parts of their base into the bill,” says Kyle Taylor, founder and executive director of campaign group Fair Vote UK, who was a supporter of earlier versions of the bill.
Taylor points out that news publishers are exempt from some of the rules proposed in the bill, as is speech “of democratic importance,” a carve-out that seems aimed at making sure that political figures with significant online followings don’t fall foul of the bill. A faction within the Conservative Party has been pushing US-style culture-war narratives on divisive issues like trans rights and Covid lockdowns, and its members have been quick to label anything that limits their reach as “cancel culture.” Some party figures have flirted with narratives on traffic-control measures and climate change that tack close to outright conspiracy theories. “The entities most likely to cause harm at scale are of course registered news publishers and people with large followings like politicians,” Taylor says. “So you’re gonna end up with this bill that’s penalizing people with 18 friends on Facebook and doing nothing to politicians or political figures.”
One of the Conservative Party’s current obsessions is “stopping the boats”—limiting the number of refugees and other migrants arriving in the country across the English Channel. In January, the government said content portraying migrants crossing the Channel in a “positive light” would be illegal under the bill. But other content that might more conventionally be seen as harmful, such as medical misinformation, isn’t on the list. That means, for example, that it will be Meta’s policy on vaccine misinformation, set in California, that determines whether antivax content is removed or deprioritized in the UK.
“It’s just simply that, in [the government’s] eyes, you have the free right to spread disinformation,” Taylor says. “It’s become this culture war issue, where they’ve had to maintain the far right of the party and the pure free-speech folks. And so they’ve just gotten rid of everything that would actually do something about disinformation. It’s prioritizing politics over good policy.”
Plenty of other ideas have also been tacked onto the bill. The current text includes age checks for porn sites and measures against scam ads and nonconsensual sharing of nude images.
As the bill nears passage into law, the most contentious—and, in the short term, most consequential—dispute is not about what content should be illegal online, but about the privacy implications of the government’s proposals. The current draft says that platforms such as messaging apps will need to use “accredited technology” to scan messages for CSAM. That, tech companies and cybersecurity experts say, amounts to a de facto ban on full end-to-end encryption, under which only the sender and recipient of a message can read its contents.
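To illustrate why scanning and end-to-end encryption are in tension, here is a toy sketch of the end-to-end property using the open source PyNaCl library. Real messengers such as WhatsApp and Signal use the far more elaborate Signal protocol, with constantly rotating keys; this shows only the core idea.

```python
# Toy sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# The private keys never leave the two devices, so any server in the
# middle relays only ciphertext it cannot read.
from nacl.public import Box, PrivateKey

alice_key = PrivateKey.generate()  # lives only on Alice's phone
bob_key = PrivateKey.generate()    # lives only on Bob's phone

# Alice encrypts for Bob using her private key and his public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The platform can store and forward `ciphertext`, but without one of
# the private keys it cannot inspect the contents; that is exactly
# what a content-scanning mandate would require it to do.
assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"meet at noon"
```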
The UK government says it’s up to tech companies to figure out a technical solution to that conflict. “They're rather disingenuously saying, ‘We're not going to touch end-to-end encryption, you don't have to decrypt anything,’” says Alan Woodward, a visiting professor in cybersecurity at the University of Surrey. “The bottom line is, the rules of mathematics don't allow you to do that. And they just basically come back and say, ‘Nerd harder.’”
One possible approach is client-side scanning, where a phone or other device would scan the content of a message before it’s encrypted and flag or block violating material. But security experts say that creates many new problems. “You just cannot do that and maintain privacy,” Woodward says. “The Online Safety Bill basically reintroduces mass surveillance and says, ‘We have to search every phone, every device, just in case we find one of these images.’”
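A deliberately simplified sketch of what client-side scanning involves appears below. It uses an exact-match SHA-256 list for brevity, where real proposals use perceptual hashes so that resized or recompressed copies of an image still match; the blocklist and function are hypothetical, invented for illustration.

```python
# Simplified sketch of client-side scanning: the device checks every
# outgoing attachment against a hash list *before* encryption, which
# is why critics describe it as a search of every phone.
import hashlib

# Hypothetical blocklist pushed to every device by a central authority.
# Nothing in the code itself constrains what gets added to this list.
BLOCKED_HASHES = {hashlib.sha256(b"stand-in for a known abuse image").hexdigest()}

def scan_before_encrypt(attachment: bytes) -> bool:
    """Return True if the attachment may be encrypted and sent."""
    return hashlib.sha256(attachment).hexdigest() not in BLOCKED_HASHES
```

The privacy objection is visible in the structure itself: the check runs on every message from every user, before encryption offers any protection, and the device’s owner has no way to audit what the blocklist contains.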
Apple had been working on a tool for scanning images on its iCloud storage service to identify CSAM, which it hoped could prevent the proliferation of images of abuse without threatening users’ privacy. But in December it shelved the project, and in a recent response to criticism from organizations that campaign against child abuse, Apple said that it didn’t want to risk opening up a backdoor for broader surveillance. The company’s argument, echoed by privacy campaigners and other tech companies, is that if there’s a way to scan users’ files for one purpose, it’ll end up being used for another—either by criminals or by intrusive governments. Meredith Whittaker, president of the secure messaging app Signal, called the decision a “death knell” for the idea that it’s possible to securely scan content on encrypted platforms.
Signal has vocally opposed the UK bill and said it may pull out of the country if it’s passed in its current form. Meta has said the same for WhatsApp. Smaller companies, like Element, which provides secure messaging to governments—including the UK government—and militaries, say they may also have to leave. Forcing companies to scan everything passing through a messaging app “would be a catastrophe, because it fundamentally undermines the privacy guarantees of an encrypted communication system,” says Matthew Hodgson, Element’s CEO.
A legal analysis of the bill commissioned by the free-expression organization Index on Censorship found that it would grant the British telecoms regulator, Ofcom, greater surveillance powers than the security services, with dangerously weak checks and balances on how those powers are used. Civil society organizations and online privacy advocates point out that these powers are being put in place by a government that has cracked down on the right to protest and given itself far-reaching powers to surveil internet users under the 2016 Investigatory Powers Act. In July, Apple protested against proposed changes to that law, which it said would force tech companies to inform the UK government each time they patched security flaws in their products.
The government's proposed approach to the trade-off between message security and CSAM enforcement has won the backing of child-protection organizations, which have been generally supportive of the Online Safety Bill. Even critics of the legislation’s current form acknowledge that the tech industry may need to make concessions on its hard-line opposition. “Privacy is important, but then so are the fundamental human rights of children who are being abused. And I think that's perhaps one point that gets lost in the debate, when we're talking about rights,” says Woods, whose sandwich packet started the whole saga. “We're not just talking about one set of rights, we're talking about a range of rights. And so you have got a hugely difficult balancing act to pull off.” And even though there are many parts of the legislation that have strayed from the original vision, there are still good ideas within it, Woods says.
Whether a compromise is eventually reached or not, the consensus among experts who spoke to WIRED is that the Online Safety Bill is likely to pass today. By the time it clears Parliament and makes it into law, the government will have no more than a year to run on its term, and the Conservative Party looks set to be ousted from power next fall. A sprawling and controversial new internet regime is likely to be a lasting part of its legacy, because the law is unlikely to be rapidly unwound. While some—including Woods—think that there’s an element of bluff in the tech companies’ threats of pulling out of the UK, others say the country’s diminished place in the world post-Brexit means that the government could be overplaying its hand.
“I think Signal will leave. I think WhatsApp will leave, because Meta has other products in the country. And I think Apple will stop operating iMessage,” says Taylor, of Fair Vote UK. “I think we have to remember that we used to be part of the largest economy in the world with serious sway against major corporations. Now, as a smaller middle power, in the UK versus Facebook, Facebook wins.”