The UK’s communications regulator, Ofcom, says it is prepared to “disrupt” tech platforms that don’t comply with the country’s controversial new Online Safety Act, including cutting them off from payment systems or even blocking them from the UK.
The act—a sprawling piece of legislation that covers a spectrum of issues, from how technology platforms should protect children from abuse to scam advertising and terrorist content—became law in October. Today, the regulator released its first round of proposals for how the act will be implemented and what technology companies will need to do to comply.
The proposed regulations would compel Big Tech companies to tackle avenues for grooming children for abuse on their platforms, and to have “adequate” trust and safety teams to limit the spread of harmful content. Companies will also have to name an individual in the UK who can be held personally accountable for violations.
“Our supervision activity starts today,” says Gill Whitehead, a former Google executive who now heads Ofcom’s Online Safety Group. “From today, we will be supervising, one-to-one, the largest firms and the firms that we think may have the highest risks of certain types of illegal harms … The tech firms need to step up and really take action.”
Ofcom’s proposals give some clarity over what tech companies will need to do to avoid penalties for breaching the act, which could include fines of up to 10 percent of their global revenue and criminal charges for executives. But the proposals are unlikely to reassure messaging platforms and online privacy advocates, who say that the act will compel platforms to undermine end-to-end encryption and create backdoors into their services, opening them up to privacy violations and security risks.
In defending the Online Safety Act, the government and its backers have portrayed it as essential to protecting children online. Ofcom’s first tranche of proposals, which will be followed by more consultations stretching into 2024, focuses heavily on limiting minors’ access to disturbing or dangerous content, and on preventing them from being groomed by potential abusers.
Ofcom says its research shows that three out of five children between the ages of 11 and 18 in the UK have received unwanted approaches that made them feel uncomfortable online, and that one in six have been sent, or been asked to share, naked or semi-naked images. “Scattergun” friend requests are used by adults looking to groom children for abuse, Whitehead says. Under Ofcom’s proposals, companies would need to take steps to prevent children from being approached by people outside of their immediate networks, including preventing accounts they’re not connected to from sending them direct messages. Children’s friend lists would be hidden from other users, and they wouldn’t appear in their own connections’ lists.
Complying is likely to mean that platforms and websites will need to improve their ability to verify users’ ages, which in turn means collecting more data on the people accessing their services. Wikipedia has said that it may have to block access for UK users, because complying would “violate our commitment to collect minimal data about readers and contributors.” Companies in the UK are already subject to some regulations that require them, for example, to prevent underage people from accessing adverts for age-restricted products, but they have previously struggled to implement so-called age-gating services that are acceptable to both regulators and customers, according to Geraint Lloyd-Taylor, a partner at the law firm Lewis Silkin. “There does need to be a focus on solutions, not just identifying the problems,” Lloyd-Taylor says.
Whitehead says that Ofcom will put out more detail about specific approaches to age verification in another consultation next month.
One of the most controversial clauses of the Online Safety Act dictates that companies that offer peer-to-peer communications, such as messenger apps like WhatsApp, must take steps to ensure that their services aren’t used to transmit child sexual abuse material (CSAM). That means that companies need to find a way to scan or search the content of users’ messages, something that security experts and tech executives say is impossible without breaking the end-to-end encryption that is used to keep the platforms private.
Under end-to-end encryption, only the sender and recipient of a message can view its content—even the operator of the platform cannot decrypt it. To meet the requirements under the act, platforms would have to be able to look at users’ messages, most likely using so-called client-side scanning, essentially viewing the message at the device level—something that privacy activists have equated to putting spyware on a user’s phone. That, they say, creates a backdoor that could be exploited by security services or cybercriminals.
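Why scanning and end-to-end encryption sit so uneasily together can be shown with a short sketch. The example below is a minimal illustration in Python using the PyNaCl library; neither the language nor the library is mentioned in Ofcom’s proposals, and real messengers use more elaborate protocols, but the underlying property is the same: the platform only ever relays ciphertext, so any scanning would have to run on the user’s device before encryption.

```python
# Minimal sketch of end-to-end encryption, assuming Python + PyNaCl.
# The "platform" here is hypothetical; real apps (e.g. WhatsApp, Signal)
# use the Signal protocol, but the basic property illustrated is the same.
from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()   # sender's key pair, held on her device
bob = PrivateKey.generate()     # recipient's key pair, held on his device

# Alice encrypts on her own device; only Bob's private key can decrypt.
sending_box = Box(alice, bob.public_key)
ciphertext = sending_box.encrypt(b"hello, this stays between us")

# The platform's servers only ever see and relay the ciphertext.
relayed_by_server = ciphertext

# Bob decrypts on his own device.
receiving_box = Box(bob, alice.public_key)
plaintext = receiving_box.decrypt(relayed_by_server)
assert plaintext == b"hello, this stays between us"

# Any content scanning therefore has to happen before encrypt() is called,
# i.e. on the device itself -- which is what "client-side scanning" means.
```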
“Let’s assume the UK government is completely virtuous. And assume that they will use this technology only for its intended purpose. It doesn’t matter because … you can’t stop other actors from using it if they hack you,” says Harry Halpin, CEO and founder of the privacy technology company NYM. “Which means not only will the UK government be reading your messages and have access to your device, but so will foreign governments and cybercriminals.”
Meta’s WhatsApp messaging service, as well as the encrypted platform Signal, threatened to leave the UK over the proposals.
Ofcom’s proposed rules say that public platforms—those that aren’t encrypted—should use “hash matching” to identify CSAM. That technology, which is already used by Google and others, compares images to a preexisting database of illegal images using cryptographic hashes—essentially, encrypted identity codes. Advocates of the technology, including child protection NGOs, have argued that this preserves users’ privacy as it doesn’t mean actively looking at their images, merely comparing hashes. Critics say that it’s not necessarily effective, as it’s relatively easy to deceive the system. “You only have to change one pixel and the hash changes completely,” Alan Woodward, professor of cybersecurity at Surrey University, told WIRED in September, before the act became law.
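To illustrate both the appeal and the weakness Woodward describes, here is a toy sketch in Python. The hash function and the database are hypothetical stand-ins, not the systems Google or child protection bodies actually run (which rely on curated databases and perceptual hashing tools), but it shows how matching compares fingerprints rather than inspecting images, and why a tiny alteration defeats a naive cryptographic hash.

```python
# Toy illustration of hash matching, assuming only Python's standard library.
# Real deployments use perceptual hashes and vetted databases; this sketch
# only demonstrates the principle and the "change one pixel" critique.
import hashlib

def image_hash(image_bytes: bytes) -> str:
    """Return a cryptographic fingerprint of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database of hashes of known illegal images.
known_hashes = {image_hash(b"bytes-of-a-known-image")}

def matches_known_image(image_bytes: bytes) -> bool:
    # The service compares fingerprints; it never "looks at" the picture.
    return image_hash(image_bytes) in known_hashes

original = b"bytes-of-a-known-image"
altered = bytearray(original)
altered[0] ^= 0x01  # flip a single bit, i.e. change "one pixel"

print(matches_known_image(original))        # True: an exact copy is caught
print(matches_known_image(bytes(altered)))  # False: the hash changes completely
```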
It is unlikely that the same technology could be used in private, end-to-end encrypted communications without undermining those protections.
In 2021, Apple said it was building a “privacy preserving” CSAM detection tool for iCloud, based on hash matching. In December last year, it abandoned the initiative, later saying that scanning users’ private iCloud data would create security risks and “inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types.”
Andy Yen, founder and CEO of Proton, which offers secure email, browsing and other services, says that discussions about the use of hash matching are a positive step “compared to where the Online Safety [Act] started.”
“While we still need clarity on the exact requirements for where hash matching will be required, this is a victory for privacy,” Yen says. But, he adds, “hash matching is not the privacy-protecting silver bullet that some might claim it is and we are concerned about the potential impacts on file sharing and storage services … Hash matching would be a fudge that poses other risks.”
The hash-matching rule would apply only to public services, not private messengers, according to Whitehead. But “for those [encrypted] services, what we are saying is: ‘Your safety duties still apply,’” she says. These platforms will have to deploy or develop “accredited” technology to limit the spread of CSAM, and further consultations will take place next year.
“Encrypted services carry a higher risk of child sexual abuse material being shared across their platforms, and for services who choose to run their messaging services and file-sharing services encrypted, they now still have to comply with safety duties,” she says—putting significant emphasis on the word “choose.”
Today’s proposed rules also state that technology platforms will need to have clear pathways for users to report harmful content, or to block or report problematic accounts. Companies that use algorithms to recommend content to their users will need to conduct safety tests. And they will need to have “well-resourced and trained” content- and search-moderation teams to manage what happens on their platforms.
The act applies to any company that has users in the UK, even those without headquarters in the country. Whitehead says she thinks that the size of the UK market means that companies will have a strong incentive to comply. Those that don’t could face serious consequences.
“We have strong enforcement powers in those situations. We have the power to fine up to 10 percent of global turnover, we have the power to prosecute senior managers, and we have the power to disrupt services,” Whitehead says. Blocking platforms isn’t a first resort, she adds—the regulator would rather “make contact with services and work with them to compliance,” but it is an option. “We do have those powers,” she says.