Microsoft said in June that a China-backed hacking group had stolen a cryptographic key from the company's systems. This key allowed the attackers to access cloud-based Outlook email systems for 25 organizations, including multiple US government agencies. At the time of the disclosure, however, Microsoft did not explain how the hackers were able to compromise such a sensitive and highly guarded key, or how they were able to use the key to move between consumer- and enterprise-tier systems. But a new postmortem published by the company on Wednesday explains a chain of slipups and oversights that allowed the improbable attack.
Such cryptographic keys are significant in cloud infrastructure because they are used to generate authentication “tokens” that prove a user’s identity for accessing data and services. Microsoft says it stores these sensitive keys in an isolated and strictly access-controlled “production environment.” But during a particular system crash in April 2021, the key in question was an incidental stowaway in a cache of data that crossed out of the protected zone.
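In rough terms, that is what makes a signing key so dangerous to lose: whoever holds it can mint tokens that services will accept as proof of identity. The sketch below is a deliberately simplified illustration of that idea, not Microsoft's actual scheme; real Microsoft account tokens are RSA-signed JWTs, whereas this example uses a symmetric HMAC secret and invented names like `mint_token` purely for brevity.

```python
# Minimal sketch of why a signing key is so sensitive: whoever holds it can
# mint tokens that validators will accept. This is NOT Microsoft's scheme;
# real MSA/Azure AD tokens are RSA-signed JWTs. An HMAC stands in for brevity.
import base64
import hashlib
import hmac
import json
import time


def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def mint_token(signing_key: bytes, claims: dict) -> str:
    """Sign a set of identity claims; anyone holding signing_key can do this."""
    body = b64(json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(signing_key, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"


def verify_token(signing_key: bytes, token: str) -> dict:
    """A service trusts any token whose signature matches the key it knows."""
    body, sig = token.split(".")
    expected = hmac.new(signing_key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))


key = b"production-signing-key"   # the kind of secret that leaked
token = mint_token(key, {"sub": "attacker@example.com", "exp": time.time() + 3600})
print(verify_token(key, token))   # accepted as a legitimate identity
```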
“All the best hacks are deaths by 1,000 paper cuts, not something where you exploit a single vulnerability and then get all the goods,” says Jake Williams, a former US National Security Agency hacker who is now on the faculty of the Institute for Applied Network Security.
After the fateful crash of a consumer signing system, the cryptographic key ended up in an automatically generated “crash dump” of data about what had happened. Microsoft's systems are meant to be designed so signing keys and other sensitive data don't end up in crash dumps, but this key slipped through because of a bug. Worse still, the systems built to detect errant data in crash dumps failed to flag the cryptographic key.
With the crash dump seemingly vetted and cleared, it was moved from the production environment to a Microsoft “debugging environment,” a sort of triage and review area connected to the company's regular corporate network. Once again, though, a scan designed to spot the accidental inclusion of credentials failed to detect the key's presence in the data.
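To get a sense of how an automated check like that can overlook a secret, consider the hypothetical scan below. It is not Microsoft's tooling, and the specific gap shown, a scanner that only recognizes a few textual formats and misses a key serialized differently, is invented purely to illustrate the failure mode the company describes.

```python
# Hypothetical illustration of how a credential scan can miss a secret.
# This is not Microsoft's scanner; the patterns and the "bug" are invented
# to show the failure mode described in the article.
import re

# The scan only knows about a few textual formats...
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),   # PEM-encoded keys
    re.compile(r"(?i)password\s*[:=]"),                     # plaintext passwords
]


def scan_crash_dump(dump: bytes) -> list[str]:
    """Flag lines that look like credentials before the dump leaves production."""
    findings = []
    for line in dump.decode(errors="replace").splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append(line)
    return findings


# ...so a signing key that lands in the dump in a different serialization
# (say, a raw base64 blob inside a JSON structure) sails straight through.
dump = b'{"signing_key": "MIIEvQIBADANBgkqhkiG9w0BAQEFAASC..."}\n'
print(scan_crash_dump(dump))   # [] -- nothing flagged, and the dump moves out
```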
Sometime after all of this occurred in April 2021, the Chinese espionage group, which Microsoft calls Storm-0558, compromised the corporate account of a Microsoft engineer. According to Microsoft, the targeted engineer's account was itself compromised with a stolen access token obtained from a machine infected with malware, though the company hasn't shared how that infection occurred.
With this account, the attackers could access the debugging environment where the ill-fated crash dump and key were stored. Microsoft says it no longer has logs from this era that directly show the compromised account exfiltrating the crash dump, “but this was the most probable mechanism by which the actor acquired the key.” Armed with this crucial discovery, the attackers were able to start generating legitimate Microsoft account access tokens.
Another unanswered question about the incident had been how the attackers used a cryptographic key from the crash log of a consumer signing system to infiltrate the enterprise email accounts of organizations like government agencies. Microsoft said on Wednesday that this was possible because of a flaw related to an application programming interface that the company had provided to help customer systems cryptographically validate signatures. The API had not been fully updated with libraries that would validate whether a system should accept tokens signed with consumer keys or enterprise keys, and as a result, many systems could be tricked into accepting either.
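The class of bug Microsoft describes can be pictured as a token validator that checks a signature against every key it knows about without ever asking whether that key is scoped for consumer or enterprise use. The sketch below reuses the simplified HMAC model from the earlier example rather than the real RSA/JWT flow; the key names, scopes, and functions are invented for illustration.

```python
# Sketch of the validation gap described above: the signature checks out, but
# nothing asks whether the key that produced it was a consumer key or an
# enterprise key. Simplified HMAC model; names and structure are illustrative.
import hashlib
import hmac

# Keys known to the validator, tagged with the scope they should be valid for.
KNOWN_KEYS = {
    "consumer-key-1":   {"secret": b"consumer-secret",   "scope": "consumer"},
    "enterprise-key-1": {"secret": b"enterprise-secret", "scope": "enterprise"},
}


def signature_matches(secret: bytes, payload: str, sig: str) -> bool:
    expected = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)


def validate_flawed(payload: str, sig: str, key_id: str) -> bool:
    """Checks the signature against any known key -- the gap in a nutshell."""
    key = KNOWN_KEYS.get(key_id)
    return bool(key) and signature_matches(key["secret"], payload, sig)


def validate_fixed(payload: str, sig: str, key_id: str, required_scope: str) -> bool:
    """Also requires the signing key to be scoped for this kind of system."""
    key = KNOWN_KEYS.get(key_id)
    if not key or key["scope"] != required_scope:
        return False
    return signature_matches(key["secret"], payload, sig)


# A token signed with the stolen *consumer* key, presented to an enterprise mailbox:
payload = "user=target@agency.example"
sig = hmac.new(b"consumer-secret", payload.encode(), hashlib.sha256).hexdigest()
print(validate_flawed(payload, sig, "consumer-key-1"))                # True  -- accepted
print(validate_fixed(payload, sig, "consumer-key-1", "enterprise"))   # False -- rejected
```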
The company says it has fixed all of the bugs and lapses that cumulatively exposed the key in the debugging environment and allowed it to sign tokens that would be accepted by enterprise systems. But Microsoft's recap still does not fully describe how attackers compromised the engineer's corporate account—such as how malware capable of stealing an engineer's access tokens ended up on its network—and Microsoft did not immediately respond to WIRED's request for more information.
The fact that Microsoft kept only limited logs during this period is significant, too, says independent security researcher Adrian Sanabria. As part of its response to the Storm-0558 hacking spree overall, the company said in July that it would expand the cloud logging capabilities that it offers for free. “It's particularly notable because one of the complaints about Microsoft is that they don't set up their own customers for security success,” Sanabria says. “Logs disabled by default, security features are an add-on requiring additional spending, or more premium licenses. It appears they themselves got bit by this practice.”
As Williams from the Institute for Applied Network Security points out, organizations like Microsoft must face highly motivated and well-resourced attackers who are unusually capable of capitalizing on the most esoteric or improbable mistakes. He says that from reading Microsoft's latest updates on the situation, he is more sympathetic to why the situation played out the way it did.
“You'll only hear about highly complex hacks like this in an environment like Microsoft's,” he says. “In any other organization, the security is relatively so weak that a hack doesn't need to be complex. And even when environments are pretty secure, they often lack the telemetry—along with the retention—needed to investigate something like this. Microsoft is a rare organization that has both. Most organizations wouldn't even store logs like this for a few months, so I'm impressed that they had as much telemetry as they did.”
Update 9:55 am, September 7, 2023: Added new details about how the attackers compromised a Microsoft engineer's account, which made theft of the signing key possible.