The UK’s Controversial Online Safety Act Is Now Law

Jeremy Wright was the first of five UK ministers charged with pushing through the British government’s landmark legislation on regulating the internet, the Online Safety Bill. The current UK government likes to brand its initiatives as “world-beating,” but for a brief period in 2019 that might have been right. Back then, three prime ministers ago, the bill—or at least the white paper that would form its basis—outlined an approach that recognized that social media platforms were already de facto arbiters of what was acceptable speech on large parts of the internet, but that this was a responsibility they didn’t necessarily want and weren’t always capable of discharging. Tech companies were pilloried for things that they missed, but also, by free speech advocates, for those they took down. “There was a sort of emerging realization that self-regulation wasn’t going to be viable for very much longer,” Wright says. “And therefore, governments needed to be involved.”

The bill set out to define a way to handle “legal but harmful” content—material that wasn’t explicitly against the law but which, individually or in aggregate, posed a risk, such as health care disinformation, posts encouraging suicide or eating disorders, or political disinformation with the potential to undermine democracy or create panic. The bill had its critics—notably, those who worried it gave Big Tech too much power. But it was widely praised as a thoughtful attempt to deal with a problem that was growing and evolving faster than politics and society were able to adapt. Of his 17 years in Parliament, Wright says, “I’m not sure I’ve seen anything by way of potential legislation that’s had as broadly based a political consensus behind it.”

Having passed, eventually, through the UK’s two houses of Parliament, the bill received royal assent today. It is no longer world-beating—the European Union’s competing Digital Services Act came into force in August. And the Online Safety Act enters into law as a broader, more controversial piece of legislation than the one that Wright championed. The act’s more than 200 clauses cover a wide spectrum of illegal content that platforms will be required to address and give platforms a “duty of care” over what their users—particularly children—see online. Some of the more nuanced principles around the harms caused by legal but harmful content have been watered down, and a highly divisive requirement has been added for messaging platforms to scan users’ messages for illegal material, such as child sexual abuse material—a requirement that tech companies and privacy campaigners say is an unwarranted attack on encryption.

Companies, from Big Tech down to smaller platforms and messaging apps, will need to comply with a long list of new requirements, starting with age verification for their users. (Wikipedia, the eighth-most-visited website in the UK, has said it won’t be able to comply with the rule because it violates the Wikimedia Foundation’s principles on collecting data about its users.) Platforms will have to prevent younger users from seeing age-inappropriate content, such as pornography, cyberbullying, and harassment; release risk assessments on potential dangers to children on their services; and give parents easy pathways to report concerns. Sending threats of violence, including rape, online will now be illegal, as will assisting or encouraging self-harm online or transmitting deepfake pornography, and companies will need to act quickly to remove such content from their platforms, along with scam adverts.

In a statement, UK Technology Secretary Michelle Donelan said: “The Bill protects free speech, empowers adults and will ensure that platforms remove illegal content. At the heart of this Bill, however, is the protection of children. I would like to thank the campaigners, parliamentarians, survivors of abuse and charities that have worked tirelessly, not only to get this Act over the finishing line, but to ensure that it will make the UK the safest place to be online in the world.”

Enforcement of the act will be left to the UK’s telecommunications regulator, Ofcom, which said in June that it would begin consultations with industry after royal assent was granted. It’s unlikely that enforcement will begin immediately, but the law will apply to any platform with a significant number of users in the UK. Companies that fail to comply with the new rules face fines of up to £18 million ($21.9 million) or 10 percent of their annual revenue, whichever is larger.
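
As a rough illustration of how that cap works, here is a short, hypothetical Python sketch (the revenue figures are invented for illustration, not drawn from the act):

```python
# Hypothetical illustration of the fine cap described above:
# the greater of £18 million or 10 percent of annual revenue.
def max_fine(annual_revenue_gbp: float) -> float:
    return max(18_000_000, 0.10 * annual_revenue_gbp)

print(max_fine(50_000_000))     # £18M floor applies to a smaller firm
print(max_fine(5_000_000_000))  # £500M: the 10 percent figure dominates
```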

Some of the controversy around the act is less about what is in it and more about what isn’t. The long passage of the legislation means that its development straddled the Covid-19 pandemic, giving legislators a live view of the social impact of mis- and disinformation. The spread of anti-vaccination and anti-lockdown messages became an impediment to public health initiatives. After the worst of the pandemic was over, those same falsehoods fed into other conspiracy theories that continue to disrupt society. The original white paper that was the bill’s foundation included proposals for compelling platforms to tackle this kind of content—which individually might not be illegal but which en masse creates dangers. That’s not in the final legislation, although the act does create a new offense of “false communications,” criminalizing deliberately causing harm by communicating something the sender knows to be untrue.

“One of the most important things was tackling harms that happen at scale. And because it’s focused so much on individual pieces of content, it’s missed that,” says Ellen Judson, head of the digital research hub at the think tank Demos. The act includes strict rules forcing platforms to move swiftly to remove any illegal post—such as terrorist content or child sexual abuse material—but has no equivalent for disinformation campaigns composed of a drip-drip of misleading content. It fails, Judson argues, to grasp that “when that turns into things going viral and spreading, then the harm can occur cumulatively.”

Wright says that the exclusion of disinformation and misinformation from the bill was partly due to confusion between the remits of different departments. The Department of Culture, Media and Sport “was told that the Cabinet Office would be taking care of all of this. ‘Don’t you worry your pretty little heads about it, it’ll be done elsewhere in something called the Defending Democracy agenda,’” he says. “And then I think, subsequently, it wasn’t really. So I think … there still is a gap there.”

Under the act, bigger platforms will be expected to police potentially harmful, but not illegal, content by applying their own standards more consistently than they currently do. Free-speech campaigners have decried this as handing private companies control over what counts as acceptable discourse online, while some experts on dis- and misinformation say it is a cop-out that leaves Big Tech less accountable for spreading falsehoods. Legal experts, however, say compliance with the law will require platforms to be more transparent and proactive. “They have to put all of those processes in place as to how their decisions will be made, or they risk actually being seen as a platform that is controlling all kinds of free speech,” says Emma Wright, technology lead at the law firm Harbottle & Lewis. That’s likely to become quite a significant burden. “It’s the new GDPR,” she says.

By far the most divisive clause out of the more than 300 pages of the Online Safety Act is Section 122, which has been widely interpreted as compelling companies to scan users’ messages to make sure that they aren’t transmitting illegal material. That would be incredibly difficult—perhaps even impossible—to do without breaking the end-to-end encryption on platforms such as WhatsApp and Signal. End-to-end encryption means that the sender and recipient of a message can see its content but the owner of the platform that it’s sent on cannot. The only way to comply with the law, experts say, would be to put so-called client-side scanning software on users’ devices to examine messages before they’re sent, which would make the encryption largely useless. The government said during the bill’s development that companies could find a technical solution to scan messages without undermining encryption; companies and experts countered that such technology doesn’t, and may never, exist.
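
To make that tension concrete, here is a minimal, purely illustrative Python sketch, using the `cryptography` package’s Fernet cipher as a stand-in for a real end-to-end protocol; the `scan_before_send` hook and the blocklist are hypothetical, not anything specified by the act. It demonstrates that a client-side scanner reads the message in the clear, on the sender’s device, before encryption ever happens, which is why critics say the end-to-end guarantee no longer holds.

```python
# Illustrative sketch only: shows why client-side scanning sits outside
# the encryption itself. Fernet (symmetric) stands in for a real E2E
# protocol; scan_before_send and BLOCKLIST are hypothetical.
from cryptography.fernet import Fernet

# In a genuine E2E system this key lives only on the two devices and is
# never known to the platform's servers.
shared_key = Fernet.generate_key()
channel = Fernet(shared_key)

BLOCKLIST = {b"known-illegal-content"}  # placeholder for a hash database

def scan_before_send(plaintext: bytes) -> bool:
    """Hypothetical client-side scanner: inspects the message in the
    clear, on the sender's device, before it is encrypted."""
    return plaintext in BLOCKLIST  # real systems would hash and match

def send(plaintext: bytes) -> bytes:
    if scan_before_send(plaintext):
        # The content has been read, and could be reported, before any
        # encryption: the step critics say defeats the E2E guarantee.
        raise ValueError("message flagged before encryption")
    return channel.encrypt(plaintext)

ciphertext = send(b"hello")         # scanned in the clear, then encrypted
print(channel.decrypt(ciphertext))  # only a key holder can read this
```

Nothing in the sketch weakens the cipher itself; the scan simply runs before the cipher is applied, which is why the experts quoted above say such scanning would render the encryption largely useless.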

“That gives Ofcom, as a regulator, the ability to obligate people like us to go and put third-party content monitoring [on our products] that unilaterally scans everything going through the apps,” Matthew Hodgson, CEO of encrypted messaging company Element, told WIRED before the bill passed. “That’s undermining the encryption and providing a mechanism where bad actors of any kind could compromise the scanning system in order to steal the data flying around the place.”

Companies whose products depend on end-to-end encryption, including Signal, threatened to leave the country, and Meta said it might pull WhatsApp from the UK if the bill passed. That cliff edge has come and gone, and both services are still available—albeit after an 11th-hour restatement by the government that it wouldn’t force platforms to adopt nonexistent technology to scan users’ messages—which was seen by some as a climbdown.

However, the clause remains in the act, which worries privacy and free-speech activists, who see it as part of a spectrum of threats against encryption. If the Online Safety Act means companies have to remove encryption or circumvent it using client-side scanning, “it then potentially opens [data] up to being scooped up into the broader surveillance apparatus,” according to Nik Williams, policy and campaigns officer at the campaign group Index on Censorship.

The Online Safety Act has concerning overlaps with another piece of legislation, the Investigatory Powers Act, which allows the government to compel platforms to remove encryption. Williams says the overlap between the two pieces of legislation creates “a surveillance gateway between the OSB and the IPA in that this can give the security services, such as MI5, MI6, and GCHQ, access to data they previously could not access … I would say it’s probably an unprecedented expansion of surveillance powers.”

The morning after the Online Safety Bill passed through the House of Lords, the UK Home Office launched a new campaign against encrypted messaging, specifically targeting Facebook Messenger.

Former minister Jeremy Wright says that the question over encryption “is frankly not resolved. I think the government has sort of dodged around giving a concluded view on what it means for encryption.” However, he says, the answer is unlikely to be as absolute as the act’s opponents are making out. Encryption won’t be banned, he says, but platforms will have to explain how their policies around it balance safety with their users’ right to privacy. “If you can meet those [safety] duties by using encryption or with encryption as part of the service, you’re fine,” he says. If not, “you have a problem … it can’t be true, surely, that a platform is entitled to say, ‘Well, I operate encryption, so that’s a get-out-of-jail-free card for me on the safety duties.’”
