This Is How They Tell Me The World Ends by Nicole Perlroth - A Book Review
Posted on February 11, 2024 • 18 minutes • 3708 words
The Book: This Is How They Tell Me the World Ends: The Cyberweapons Arms Race
Nicole Perlroth is a journalist who covers cybersecurity, digital espionage, and sabotage. She has reported on hacks of nuclear power plants, airports, elections, and other critical infrastructure. More about her biography can be found here.
In the book, she shares firsthand accounts from the actors behind cases such as the Stuxnet attack on Iran’s nuclear program, and focuses on nation-states’ efforts to gain power in this new arms race. The book was published on February 9, 2021, and is available on Amazon. If you’re interested in the cybersecurity attacks between nation-states over the last two decades, I would definitely suggest checking out her book to gain insight into that Dark Territory.
I’d like to quote some passages from the book that I found interesting:
The first rule of the zero-day market was: Nobody talks about the zero-day market. The second rule of the zero-day market was: Nobody talks about the zero-day market. I’d posed this question many times, and I knew it was the one question nobody in his business would answer. (p.39)
…iDefense became the first shop to publicly open its doors to hackers and start paying bounties for zero-day bugs. (p.52)
The New Hacker’s Dictionary … defines hacker as “one who enjoys the intellectual challenge of creatively overcoming or circumventing limitations.” (p.54)
…another researcher discovered a zero-day exploit in the Jeep Cherokee that allowed them to seize control of the steering wheel, disable the brakes, screw with the headlights, indicators, wipers, and radio and even cut the engine from a remote computer thousands of miles away. Eight months later, the automaker was still dealing with the fallout. (p.77)
The NSA has a strict prepublication review policy: anything former employees publish, for as long as they shall live, must be run by the agency’s minders. Until an NSA review board has cleared them for publication, anything a former NSA employee wishes to publish will remain classified. (p.81)
In 1945, Soviet schoolchildren had presented the American ambassador with an elaborate hand-carving of the Great Seal of the United States. The carving hung in the ambassador’s home office for seven years, until U.S. officials discovered a tiny bug—called “Golden Mouth”—hidden deep in the wood that allowed the Soviets to eavesdrop on the ambassador at will. The furniture had been switched out over time, but the bug kept its foothold in the wall. The bug survived four ambassadors, until its discovery in 1952. (p.91)
Organizations can’t stop the world from changing. The best they can do is adapt. The smart ones change before they have to. The lucky ones manage to scramble and adjust, when push comes to shove. The rest are losers, and they become history. (p.101)
…unless you wrote the source code yourself, you could never be confident that a computer program wasn’t a Trojan horse. (p.105)
NSA agents worked with Crypto AG executives to introduce trapdoors into Crypto AG’s encryption machines, allowing NSA decoders to easily decrypt their contents. (p.114)
So in 1996 … the agency’s Directorate of Science and Technology developed surveillance devices that imitated flying insects and gave birth to the lithium-iodine battery. The CIA developed the battery to improve its surveillance operations, but eventually it would be used in smartphones, electric vehicles, even pacemakers.
“If we didn’t know where they got their hair cut, we weren’t doing our jobs,” one former NSA employee told me. (p.125)
TAO hackers … ripping apart any new hardware and software that hit the market in search of bugs to add to the agency’s zero-day arsenal. (p.127)
At the Energy Department’s Oak Ridge National Laboratory in Tennessee, engineers and nuclear experts had built a near replica of Natanz’s nuclear facility, complete with Iran’s P-1 centrifuges. The engineers understood that to break Iran’s program, they had to break its centrifuges—devices that spin at supersonic speeds, more than 100,000 revolutions per minute—and separate the isotopes that make things go boom. (p.140)
The code was designed to speed up the rate at which the rotors spun to 1400 hertz for exactly fifteen minutes, before returning to normal for twenty-seven days. After those twenty-seven days were over, it would slow the speed of the rotors to just 2 hertz for fifty minutes, before returning to normal for another twenty-seven days, and repeating the whole process all over again. (p.147)
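The cadence described here is simple enough to sketch as a loop. Below is a minimal, hypothetical Python simulation of that schedule; Stuxnet itself was Siemens S7/PLC code, and the 1,064 Hz nominal frequency is an assumption drawn from public analyses, not from the book:

```python
# Hypothetical replay of the sabotage cadence quoted above -- an
# illustration, not Stuxnet's actual (Siemens S7/PLC) code.

NORMAL_HZ = 1064          # assumed nominal drive frequency (public analyses)
MIN, DAY = 60, 86_400     # seconds per minute / per day

def sabotage_schedule():
    """Yield (frequency_hz, duration_s) steps, repeating forever."""
    while True:
        yield 1400, 15 * MIN        # overspeed for exactly fifteen minutes
        yield NORMAL_HZ, 27 * DAY   # back to normal for twenty-seven days
        yield 2, 50 * MIN           # near-stall for fifty minutes
        yield NORMAL_HZ, 27 * DAY   # normal for another twenty-seven days

steps = sabotage_schedule()
for _ in range(4):                  # print one full cycle
    hz, secs = next(steps)
    print(f"drive rotors at {hz:>4} Hz for {secs:>9} s")
```

The long dormant stretches were about stealth; as the book recounts, the damage accrued slowly enough that Iran’s engineers blamed faulty parts rather than sabotage.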
…in March 2011, Ralph Langner was in Long Beach. He’d been asked to deliver a ten-minute talk breaking down the Stuxnet code at the annual TED ideas conference. (p.152)
The United States may have thwarted a conventional war, but in releasing Stuxnet on the world, it opened up an entirely new battlefront. The worm had crossed the Rubicon from defensive espionage to offensive cyberweapon, and in just a few years, it would come boomeranging back on us. (p.153)
By 2012 the U.S.’s three-year-old Cyber Command’s annual budget had tripled from $2.7 billion to $7 billion (plus another $7 billion for cyberactivities across the Pentagon), while its ranks swelled from nine hundred dedicated personnel to four thousand, and eventually fourteen thousand by 2020. (p.155)
In 2013 the NSA added a new $25.1 million line item to its classified black budget. That was how much it planned to start spending each year to procure “software vulnerabilities from private malware vendors.” By one estimate, that would enable the agency to purchase as many as 625 zero-days a year, in addition to the vulnerabilities the agency was developing in-house. (p.159)
Increasingly it was losing its best hackers and analysts to higher-paying jobs at private defense contractors like Booz Allen, Northrop Grumman, Raytheon, Lockheed, and Harris, and at boutique zero-day hunting and development firms… (p.160)
…once the intel agencies started allocating more of their budgets for zero-day exploits and attack tools off the private market, there was even less incentive to turn over the underlying zero-day flaws to vendors for patching. (p.160)
…one former VRL employee told me in late 2019, “It’s getting harder to know if you’re selling these tools to the good guys or enabling the bad.” (p.164)
It’s a pain in the ass to find a zero-day. Weaponizing a zero-day is an even bigger pain in the ass. Testing it and making it reliable is the biggest pain in the ass of all. (p.164)
Inside, VRL’s employees were secretly making life hell for the security engineers at companies like Google and Apple, poking holes in their products and weaponizing their code. The tools VRL was developing were the kind that, if discovered, would send security engineers straight to war rooms in Mountain View and Cupertino, to figure out just how far their own government had crawled into their systems, and how to get them out. (p.166)
“Everyone who works there has a monklike dedication to the job,” one former employee told me. “They are more like the Knights Templar than what you’d see at Google or Facebook.” (p.166)
These former employees used one example over and over again to demonstrate the loyal, Navy SEAL–like culture of the company: the story of an employee who discovered he had kidney failure. Nearly everyone at the company got tissue-typed until they found a match, and a colleague ended up giving him a kidney. This kind of thing was not happening at Google or Facebook. (p.166)
…what really pissed off Aitel’s superiors is what he did after he left the Fort. He co-authored a book with several well-known hackers called The Shellcoder’s Handbook: Discovering and Exploiting Security Holes. It became a bible for aspiring hackers. In it, Aitel detailed specific exploits and attack methodologies that his former bosses felt went too far in disclosing NSA spycraft. (p.173)
The Hacking Team leaks offered an incredible window into how zero-day exploits were being priced, traded, and incorporated into ever-more-powerful off-the-shelf spyware and sold to governments with the most abysmal of human rights records. … By late 2015, no intelligence agency on the planet was going to miss out. (p.198)
But Pegasus came with a nifty trick. It could sense when it was killing battery life, shut itself down, and wait for a target to connect to WiFi before extracting any more data. This was, as far as I could tell, the most sophisticated spyware on the commercial market. (p.201)
…you could “remotely and covertly collect information about your target’s relationships, location, phone calls, plans and activities—whenever and wherever they are.” And, their brochures promised, Pegasus was a “ghost” that “leaves no traces whatsoever.” (p.203)
Besides, Mansoor told me, “I would like to be able to fight for my rights, and the rights of others, to gain my freedom from the inside. It’s not easy, but I am doing this because I believe this is the most difficult way for anyone to express their patriotism to their country.” (p.207)
And each sale required approval by Israel’s Ministry of Defense. To date, they told me, NSO had yet to be denied a single export license. (p.209)
“Raytheon told me, ‘It didn’t work.’ Then a year later, I learned from a friend at the company that they’d been using my exploit for months. And I never got paid. It’s an arms race,” Shatner told me. (p.243)
They had spread Flame through Microsoft’s Windows software update mechanism. What made that so terrifying was that 900 million Microsoft computers got patches and updates that way. Infecting Microsoft’s updates was the Holy Grail for hackers, and a nightmare for Redmond. (p.247)
At the Nuclear Regulatory Commission, which regulates nuclear facilities, information about crucial nuclear components was left on unsecured network drives, and the agency had lost track of laptops with critical data. (p.254)
…one of Synack’s hackers had found a way into the Pentagon’s classified networks in under four hours. “This game of zero-day exploits is a big deal,” General Bradford J. “B. J.” Shwedo told me in what I now knew to be a gross understatement. “If you wait to find out about a zero-day on game day, you’re already mauled. Sealing yourself off from the world is not the future of warfighting. In cyber, it’s spy-versus-spy all the time.” (p.255)
…Chris Inglis, the retired NSA deputy director, once said: “If I were to score cyber the way we score soccer, the tally would be 462–452 twenty minutes into the game. In other words, it’s all offense and no defense.” (p.255)
The Snowden revelations that fall showed that, without the companies’ knowledge or cooperation, the NSA and its British counterpart, GCHQ, were sucking up companies’ data from the internet’s undersea fiber-optic cables and switches. In agency jargon, this was called “upstream” collection, as opposed to “downstream” methods like Prism, in which agencies demand customers’ data from companies through secret court orders. (p.256)
Google was also now laying its own fiber-optic cable beneath the world’s oceans and rigging it with sensors that would alert the company to undersea taps. (p.259)
Project Zero’s team gave vendors ninety days. After that, they would dump their bugs online. (p.261)
Cook summoned my colleague David Sanger and me to a meeting at the Palace Hotel in San Francisco to personally make his case. “Their argument is to put in an exploit for the good guys—it’s bizarre—and it’s a flawed argument from the get-go,” Cook told us. “This is a global market. You should go out and ask governments around the world: if Apple does some of the things the FBI is suggesting—a door, a key, a magic wand that gets them an in—how would you feel about that? The only way to get information—at least currently, the only way we know—would be to write a piece of software that we view as sort of the equivalent of cancer. It makes the world worse. And we don’t want to participate in anything that makes the world worse.” (p.268)
Even if Apple did what the government was asking, even if it wrote a backdoor just for this one case, that backdoor would become a target for every hacker, cybercriminal, terrorist, and nation-state under the sun. (p.269)
Unnamed hackers had approached the FBI with an alternative way in, a hacking method that allowed the government to bypass Apple’s encryption to access Farook’s iPhone—a zero-day exploit. (p.270)
It was the first time in history that the government had openly copped to paying private hackers top dollar to turn over vulnerabilities in widely used technology. (p.270)
If you didn’t know who you were talking to, the conference organizers reminded us, you should not be asking. (p.271)
“There are two ways to learn about your adversary: Read white papers about your adversary or be the adversary,” Fick told me. (p.272)
I watched hackers “pwn” Apple and Java software by day, then explain over cocktails how to reverse-engineer Tinder to geolocate NSA spies from the Fort Meade parking lot. (p.273)
“Cheating the system is part of the Argentine mentality,” Cesar told me. “Unless you’re rich, you grow up without a computer. To access new software, you have to teach yourself everything from the ground up.” (p.276)
The barrier to entry was so low that for the cost of three F-35 stealth bombers, Iran’s Islamic Revolutionary Guard Corps built a world-class cyber army. (p.293)
Before Stuxnet, the IRGC reportedly budgeted $76 million a year for its fledgling cyber force. After Stuxnet, Iran poured $1 billion into new cyber technologies, infrastructure, and expertise, and began enlisting, and conscripting, Iran’s best hackers into its new digital army. Stuxnet had set Tehran’s nuclear programs back years and helped stave off the Israeli bombs. But four short years later, Iran had not only recovered its uranium but also installed eighteen thousand centrifuges—more than three times the number spinning at the time of the first attack. And now Tehran claimed to have the “fourth-biggest cyber army in the world.” (p.293)
That December, the president announced that the United States would “respond proportionally” to North Korea’s attack. He declined to give specifics, saying only that the United States would hit “in a place and time and manner of our choosing.” Later, officials would tell me Obama was speaking to two audiences that day; he was addressing Iran too. Three days later, a funny thing happened. North Korea’s already dim connection to the outside world went dark for an entire day. (p.301)
Sandworm, the agency warned in an October 29, 2014, security advisory, wasn’t just after General Electric’s clients. It was also targeting clients of two other industrial control software makers: Siemens, the same company the United States and Israelis had hijacked in the Stuxnet attack, and Advantech, one of the leading “Internet of Things” enablers in the world. (p.318)
“We were still stuck in legacy spy-versus-spy Cold War thinking,” a senior official told me. “When we first saw those attacks, we said, ‘That’s what Russia does. That’s what we do. It’s a gentleman’s thing. Nobody goes too far with it.’ Then Ukraine and the election blew the lid right off that theory.” (p.322)
The Heartbleed bug was the result of a classic coding error, a case of buffer over-read, which allowed anyone to extract data, including passwords and encryption keys, from systems meant to be protected. (p.325)
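Since the quote names the bug class, a toy simulation may help make it concrete. The sketch below is not OpenSSL’s code (the real flaw was a C `memcpy` in the TLS heartbeat handler that trusted an attacker-supplied length field); it only mimics the effect of the unchecked length:

```python
# Toy model of a buffer over-read: the reply is sized by the length the
# request *claims*, never checked against what actually arrived, so
# whatever sits next to the payload in memory leaks out.

heap = bytearray(b"bird")                    # the 4-byte payload...
heap += b" | secret-key=0xDEADBEEF"          # ...next to simulated secrets

def heartbeat_reply(claimed_len: int, actual_len: int = 4) -> bytes:
    # The fix amounts to: if claimed_len > actual_len, drop the request.
    return bytes(heap[:claimed_len])         # BUG: trusts claimed_len

print(heartbeat_reply(4))    # b'bird' -- an honest request
print(heartbeat_reply(28))   # leaks the adjacent "secret" too
```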
“Governments are starting to say, ‘In order to best protect my country, I need to find vulnerabilities in other countries,’” Schmidt told me before his passing. “The problem is that we all fundamentally become less secure.” (p.327)
Stuxnet had inspired dozens of other countries to join the zero-day hunt, and the United States was losing control over the market it had once dominated. “If someone comes to you with a bug that could affect millions of devices and says, ‘You would be the only one to have this if you pay my fee,’ there will always be someone inclined to pay it,” Schmidt told me. What he said next never left me: “Unfortunately, dancing with the devil in cyberspace is pretty common.” (p.328)
That March, Fancy Bear’s Russian hackers had sent John Podesta, Hillary Clinton’s campaign chairman, a fake Google alert, declaring that he had to change his Gmail password. Podesta had forwarded the email to the DNC’s IT staff for vetting, and in what would become the most tragic typo in American election history, a campaign aide wrote back, “This is a legitimate email.” He had intended to type “illegitimate,” but the damage was done. (p.335)
When linguists began to tease Guccifer 2.0’s responses apart, it was clear that he was no Romanian at all: he’d used Google Translate. This was a Russian influence operation through and through. The Russians had a name for this kind of operation—kompromat, the Russian art of spreading damaging information to discredit their enemies. (p.338)
This, even as Russia’s attack became the most destructive in world history. Months later, Bossert would pin NotPetya’s damages at $10 billion. But some still believe that is a gross underestimate. The world has only been able to tally damages reported by public companies and government agencies. Many smaller organizations surveyed their wreckage in quiet, and publicly denied they’d been hit. (p.366)
And perhaps most telling of all was that they specifically designed their ransomware to avoid infecting Russian computers. The code searched for Cyrillic keyboard settings and when it found them, moved right along—technical proof they were abiding by Putin’s first rule: no hacking inside the Motherland. (p.389)
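The keyboard check is concrete enough to sketch. Below is a hedged illustration of that kind of test on Windows using the Win32 `GetKeyboardLayoutList` API (0x0419 is the language ID for Russian layouts); it shows the pattern the quote describes, not code from any actual malware:

```python
# Enumerate installed keyboard layouts on Windows and bail out if a
# Russian (Cyrillic) layout is present -- the kill-switch pattern the
# book describes. Windows-only; illustrative sketch.
import ctypes
import sys

RUSSIAN_LANGID = 0x0419   # low word of an HKL handle is its language ID

def has_russian_layout() -> bool:
    user32 = ctypes.windll.user32
    n = user32.GetKeyboardLayoutList(0, None)      # count installed layouts
    layouts = (ctypes.c_void_p * n)()
    user32.GetKeyboardLayoutList(n, layouts)
    return any(((hkl or 0) & 0xFFFF) == RUSSIAN_LANGID for hkl in layouts)

if has_russian_layout():
    sys.exit(0)   # "moved right along"
```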
More recently, intelligence analysts determined that a prominent Russian cybercriminal—the leader of an elite cybercrime group that called itself Evil Corp—wasn’t just working hand-in-hand with the FSB. He was FSB. (p.390)
In Georgia, a database that verified voter signatures on their mailed ballots was locked up by Russian hackers in a ransomware attack that also dumped voters’ registration data online; in Louisiana, the National Guard was called in to stop cyberattacks on smaller government offices that used tools that previously had only been seen in North Korea. Russia’s A Team was caught probing systems in Indiana and California. (p.401)
But in the two decades since 9/11, the threat landscape has been dramatically overhauled. It is now arguably easier for a rogue actor or nation-state to sabotage the software embedded in the Boeing 737 Max than it is for terrorists to hijack planes and send them careening into buildings. Threats that were only hypotheticals a decade ago are now very real. (p.414)
The very institutions charged with keeping us safe have opted, time and time again, to leave us more vulnerable. (p.417)
The annual cost from cyber losses now eclipses those from terrorism. In 2018, terrorist attacks cost the global economy $33 billion, a decrease of thirty-eight percent from the previous year. That same year, a study by RAND Corporation from more than 550 sources—the most comprehensive data analysis of its kind—concluded global losses from cyberattacks were likely on the order of hundreds of billions of dollars. And that was the conservative estimate. Individual data sets predicted annual cyber losses of more than two trillion dollars. (p.418)
Someone had crossed out “Move fast and break things” and replaced it with “Move slowly and fix your shit.” (p.418)
It is nearly impossible to think of a company or government agency that has not been hacked. We now need to take what the NSA itself calls a “defense in-depth” approach, a layered approach to security that begins with the code. (p.419)
Security experts have been arguing for secure design since long before the internet. Microsoft’s 2002 Trustworthy Computing directive was a turning point. It wasn’t perfect. There were missteps and setbacks along the way. Windows vulnerabilities still formed the raw matter for Stuxnet, WannaCry, and NotPetya. But in other ways, it worked. Microsoft used to be a punchline; now it is widely seen as a security leader. (p.419)
Today, the average high-end car contains more than 100 million lines of code— more than a Boeing 787, F-35 fighter jet, and space shuttle. (p.420)
…in 2014 when researchers discovered Heartbleed, the bug in the open-source OpenSSL encryption protocol. Two years went by before anyone noticed a gaping hole had left over a million systems vulnerable. (p.420)
After Heartbleed, the non-profit Linux Foundation and tech companies that relied on OpenSSL stepped up to find and fund critical open-source projects. The Linux Foundation, together with Harvard’s Laboratory for Innovation Science, is now midway through a census effort to identify the most critical and widely deployed open-source software in use, with the goal of giving developers the funds, training, and tools to protect it. (p.420)
A secure architecture entails identifying the most critical systems—the crown jewels—whether that is customer data, medical records, trade secrets, production systems, or the brakes and steering systems in our cars, and compartmentalizing them from noncritical systems, allowing interoperability only where critical. (p.421)
Despite the attraction of zero-days, Rob Joyce, the head of TAO and essentially the nation’s top hacker, gave a rare talk four years ago in which he called zero-days overrated and said unpatched bugs and credential theft are far more common vectors for nation-state attacks. (p.423)
Studies have shown that—digitally speaking—the safest countries in the world, those with the lowest number of successful cyberattacks per machine, are actually the most digitized. The safest are in Scandinavia—Norway, Denmark, Finland, Sweden—and more recently, Japan. Norway, the safest of them all, is the fifth most digitized country in the world. But Norwegians implemented a national cybersecurity strategy in 2003 and they revisit and update it every year to meet current threats. (p.425)
Japan may even be more instructive. In Japan, the number of successful cyberattacks dropped dramatically—by more than 50 percent—over the course of a single year, according to an empirical study of data provided by Symantec. Researchers attributed Japan’s progress to a culture of cyber hygiene but also to a cybersecurity master plan that the Japanese implemented in 2005. Japan’s policy is remarkably detailed. It mandates clear security requirements for government agencies, critical infrastructure providers, private companies, universities, and individuals. It was the only national cybersecurity plan, researchers discovered in their study, to address “air-gapping” critical systems. In the years following its implementation, researchers found that Japanese devices were better protected than other countries with similar GDPs. (p.425)
As the political scientist Joseph S. Nye put it in the wake of Russia’s 2016 interference: “The defense of democracy in an age of cyber information warfare cannot rely on technology alone.” (p.426)