
Cybersecurity First Principles - by Rick Howard - A Book Review

Posted on July 27, 2024 • 35 minutes • 7297 words

Author

Rick Howard is a former Commander of the US Army’s Computer Emergency Response Team. He is currently the chief analyst at Cyberwire. The book was published on April 25, 2023, and can be found on its Amazon page and the book's official page.

Book

In his book, the author proposes a fundamental principle for cybersecurity strategies:

…our foundational first principle, our cybersecurity cornerstone that we will build the entire infosec program on must address three elements: probability, materiality, and time. If that’s true, then here is my proposal for the ultimate cybersecurity first principle and the thesis for this book: “Reduce the probability of material impact due to a cyber event over the next three years.” (p.36)

From both the business and national security perspectives, that proposal has great potential. Whatever happens in cyberspace can create real-world impact, as the CrowdStrike outage of July 19, 2024, recently demonstrated.

He also provides controls and detailed suggestions for implementing the principle and minimizing impact:

To reduce the probability of material impact to our organization due to a cyber attack, our first principle cybersecurity strategies include risk forecasting, zero trust, resilience, automation, and intrusion kill chain prevention. (p.116)

When calculating risk, his main suggestion is to quantify every possible aspect so that business decisions become easier for executives:

Get rid of the heat maps. Embrace the idea that probabilities are nothing more than a measure of uncertainty. Use real numbers. (p.197)

That’s the road map of his proposal, as presented on the book's official website.

I want to cite specific points from the book:

Citations

…these ideas are not checklists. They represent ways to reduce the probability of material impact. Depending on your environment, some will work better than others. (p.12)

The Canon project (cybersecuritycanon.com) is a security professional community effort to identify all the books that cybersecurity professionals should read. (p.14)

For first principles must not be derived from one another nor from anything else, while everything has to be derived from them. (p.17)

Rene Descartes, published his “Principles of Philosophy.” He starts “with the most common matters, as, for example, that the word PHILOSOPHY signifies the study of wisdom, and that by wisdom is to be understood not merely prudence in the management of affairs, but a perfect knowledge of all that man can know, as well for the conduct of his life as for the preservation of his health and the discovery of all the arts.” (p.17)

In the first place, they must be so clear and evident that the human mind, when it attentively considers them, cannot doubt of their truth; in the second place, the knowledge of other things must be so dependent on them as that though the principles themselves may indeed be known apart from what depends on them, the latter cannot nevertheless be known apart from the former. (p.18)

Descartes’ approach, by doubting everything, established the ultimate first principle of philosophy: “I think, therefore I am” (Cogito, ergo sum). (p.18)

They recognized some inconsistencies in the current set of rules used by the math community at the time. You could use the same rules to get two different and absolutely correct results, something called the Russell paradox. (p.18)

Whitehead and Russell famously wrote this line: “The above proposition is occasionally useful.” And you all thought that math nerds weren’t funny. Shame on you. (p.18)

the first step in solving any problem is recognizing that you have a problem. (p.19)

…security professionals still talk about today when you hear them discuss the idea of shifting left or security by design. (p.20)

In 1975, Jerome Saltzer and Michael Schroeder published their paper, “The Protection of Information in Computer Systems,” in Proceedings of the IEEE. In it, they lay out the early beginnings of the CIA triad, even though they don’t use that exact terminology. (p.20)

…they promote an idea called fail‐safe defaults, meaning deny everything first and allow by exception. (p.20)
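To make the fail-safe defaults idea concrete, here is a minimal Python sketch of a default-deny policy check. The identities and resources are hypothetical; the point is only that nothing is allowed unless it appears on an explicit allowlist.

```python
# Fail-safe defaults: deny everything first, allow by exception.
# The (identity, resource) pairs below are made-up examples.
ALLOWED = {
    ("alice", "payroll-db"),
    ("bob", "build-server"),
}

def is_allowed(identity: str, resource: str) -> bool:
    # Anything not explicitly granted is denied by default.
    return (identity, resource) in ALLOWED

print(is_allowed("alice", "payroll-db"))    # True: granted by exception
print(is_allowed("mallory", "payroll-db"))  # False: default deny
```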

Dr. Fred Cohen published the first papers in 1991 and 1992 that used defense in depth to describe a common cybersecurity model in the network defender community. He didn’t invent the phrase, but he is most likely the first one to describe it in a paper. (p.20)

From the 1990s until present day, the common practice has been to add additional control tools behind the firewall to provide more granular functions. In the early days, we added intrusion detection systems and antivirus systems. All of those tools together formed something called the security stack, and the idea was that if one of the tools in the stack failed to block an adversary, then the next tool in line would. If that one failed, then the next would take over. That is defense in depth. (p.21)

In 1998, Donn Parker published his book Fighting Computer Crime: A New Framework for Protecting Information, where he strongly condemns the elements in the CIA triad as being inadequate. He never mentions the phrase “CIA triad,” though. He proposed adding three other elements (possession or control, authenticity, and utility) that eventually became known as the Parkerian Hexad, but the idea never really caught on for reasons probably only a marketing expert could explain. (p.21)

Complete knowledge of a system is unobtainable; therefore, uncertainty will always exist in our understanding of that system. (p.23)

The idea of first principle thinking has been around since almost the beginning of enlightened scientific thought. Applying the concept to cybersecurity is a relatively new idea, though. (p.24)

With an organization like NIST proclaiming its authenticity as late as 2020, the CIA triad is the de facto cybersecurity first principle. (p.24)

Besides, hackers use code exploitation in less than ~10 percent of the publicly known breaches. (p.29)

…focus only on what is material to the business. Everything else is nice to have. (p.35)

The risk management team at Datamaran defines materiality this way: A material issue can have a major impact on the financial, economic, reputational, and legal aspects of a company, as well as on the system of internal and external stakeholders of that company. (p.35)

The “how” is a collection of tactics, or discrete steps, that we might take that will bring us closer to achieving the goals of our strategy. (p.42)

The World Economic Forum formalized resilience in 2012: …the ability of systems and organizations to withstand cyber events. (p.48)

In 2017, the International Standards Organization (ISO) defined it (resilience) as follows: “…the ability of an organization to absorb and adapt in a changing environment to enable it to deliver its objectives and to survive and prosper.” (p.48)

We know that the adversaries have automated their infrastructure. If the infosec world continues to operate in manual mode, it’s similar to what Sean Connery said in the 1987 movie The Untouchables. We’re bringing a knife to a gunfight. Viewed through that lens, automation is not merely a nice‐to‐have feature, something that we will do when we get the time. Automation becomes the lynch pin to the entire first principle strategy deployment. (p.51)

Nihil Facile Est. His translation: “Nothing is easy.” Words to live by. (p.62)

When new vulnerabilities and exploits pop up, do the following:

Dmitry Raidman (the CTO at Cybeats) says, “The bad guys are on the hunt for vulnerable open source software (OSS) supply chain components so that they can trojanize other legit commercial and open source products.” (p.70)

In parallel, we have mostly turned a blind eye to new software coming in through the delivery door to update our production systems. Even the cybersecurity leaders who do monitor and manage this situation rely on manual, homegrown, and incomplete tooling to get this done. (p.70)

Spearheaded by the Linux Foundation in 2010, the Software Package Data Exchange (SPDX), also known as ISO/IEC 5962, became the international open standard for security, license compliance, and other software supply chain artifacts in August 2021. (p.71)

In 2015, the International Organization for Standardization (ISO) released ISO/IEC 19770‐2 for Software identification tags (SWID). This format creates a template in XML format to identify and describe software components and relevant patches. Then, in 2017, the Open Web Application Security Project Foundation (OWASP) designed CycloneDX, a lightweight standard with features of both SPDX and SWID. (p.72)
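To show what these formats look like in practice, here is a toy SBOM entry in the CycloneDX JSON style, built as a Python dictionary. It describes a single real open source component; a production SBOM would enumerate every component and its dependencies.

```python
import json

# A toy CycloneDX-style SBOM with one component. Field names follow the
# CycloneDX JSON format; a real SBOM would list every dependency.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "log4j-core",
            "version": "2.14.1",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
        }
    ],
}
print(json.dumps(sbom, indent=2))
```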

But what we do have is a standard way to articulate vulnerability information in component software. It’s called a Vulnerability Exploitability Exchange (VEX) document. According to Dmitry, “VEX is a result of the work by the continuation of the NTIA Working group that created the SBOM standard” and is managed by CISA. (p.73)

It helps that the U.S. government, at some point, will mandate that all software suppliers provide SBOM information as part of their contract. I believe that will tip the dominoes and the rest of the industry will follow that. (p.73)

In 2002, the U.S. Congress passed the famous Sarbanes–Oxley law, which, among many other things, held companies liable for bad access control. (p.75)

That said, for enterprise security, you absolutely can’t pursue the zero trust first principle strategy until you have a robust identity management system in place. (p.78)

We still publicly shame those users in annual reports of the most common and lame passwords used by everybody on the Internet, mostly some combination of “12345” and “password.” This is essentially victim blaming and faults people for being exceptionally bad at using a stop‐gap identity system invented in the early 1960s. That doesn’t seem right. (p.80)

Authenticators use an Internet Engineering Task Force (IETF) algorithm to generate one‐time codes called time‐based one‐time passwords (TOTPs). (p.84)
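The algorithm he refers to is TOTP (RFC 6238), which layers a time window on top of HMAC-based one-time passwords (RFC 4226). Here is a minimal sketch using only Python's standard library; the secret is a made-up example of the base32 strings that authenticator apps typically consume.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # Decode the shared secret (base32, as authenticator apps use).
    key = base64.b32decode(secret_b32, casefold=True)
    # T = number of 30-second intervals since the Unix epoch (RFC 6238).
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a fresh 6-digit code every 30 seconds
```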

It’s probably fine for run‐of‐the‐mill Internet use, like logging into the library. But if you have material information to protect or if you’re a spy, steer clear of SMS authentication. (p.86)

Fast Identity Online (FIDO) is the standards body that is pushing U2F authentication technologies. (p.87)

In the early days of tracking cyber spies (2000s), one of the indicators of Chinese government involvement was the time when the attacks occurred, mostly between 9 a.m. and 5 p.m. Shanghai time. (p.90)

Today, if I’m planning an offensive cyber operation, there would always be a false flag component to emulate some known adversary and leave behind time zone traces that match. (p.90)

Zero trust initiatives fail because network defenders don’t allocate enough resources in terms of people and processes to manage them. At worst, some of us think that we can flip a switch and the system will manage itself. (p.92)

…deciding which employees get access to which company resources is not a decision we want sitting with the vaunted two‐guys‐and‐a‐dog team. That’s a decision that should be addressed in policy at the senior levels of your organization. (p.92)

Even if you work in a small to medium‐sized company, setting access policy is a business process decision, not an IT decision. (p.92)

We had everything, all of the bells and whistles that you could possibly want in a security stack. Well, he ran out of money buying the tools before he hired the people we needed to manage it all. (p.93)

…zero trust is a philosophy and a journey without end, not a product. It’s a way of life, a strategy that directly supports our ultimate cybersecurity first principle: reduce the probability of material impact. (p.94)

The first published was the original Lockheed Martin Kill Chain Paper that described the strategy. The second was the DOD’s Diamond model that operationalized Cyber Threat Intelligence (CTI) teams along the kill chain idea. The third was MITRE ATT&CK: the best open source collection of adversary playbook intelligence in the world. (p.100)

The Air Force’s target acquisition model is called Find, Fix, Track, Target, Engage, and Assess, also known as F2T2EA, because, you know, the military loves acronyms more than cyber folks. More simply, they call it the kill chain. Jumper’s mandate to the Air Force was to compress the kill chain from hours or days to less than 10 minutes. (p.101)

The year 2010 was big in cybersecurity with multiple groundbreaking milestones and revolutionary ideas. Google sent out shockwaves when it announced that it had been hacked by the Chinese government. John Kindervag published his foundational “No More Chewy Centers” paper on zero trust. The world also learned about the U.S./Israeli cyber campaign (Olympic Games, commonly referred to as Stuxnet) designed to slow down or cripple Iran’s nuclear weapon production capability and demonstrated the difficulty of crafting attack sequences for hard cyber targets. Finally, Lockheed Martin published its seminal paper, “Intelligence‐Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains,” written by Eric Hutchins, Michael Cloppert, and Rohan Amin; this was the symbolic starting gun for the subject of this chapter. (p.101)

From the paper, “Intelligence‐driven computer network defense is a risk management strategy that requires a new understanding of the intrusions themselves, not as singular events, but rather as phased progressions.” It’s a simple and elegant strategy: know the enemy. (p.102)

As perimeter defense and defense in depth is passive (designed to defeat the generic hacker), the intrusion kill chain model is active and designed to defeat specific cyber adversaries. (p.103)

One is a strategy document (Lockheed Martin), one is an operational construct for defensive action (MITRE), and one is a methodology for cyber threat intelligence teams (Diamond). (p.107)

MITRE’s extension to the Kill Chain model includes the grouping of tactics (the “why”), the techniques used (the “how”), and the specific implementation procedures the adversary group used to deploy the tactic (the “what”). (p.107)
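As a rough illustration of that why/how/what nesting, here is how one adversary playbook entry might be represented. The tactic and technique are real ATT&CK identifiers; the procedure string and the overall data layout are my own simplification, not MITRE's schema.

```python
# A simplified view of ATT&CK's tactic -> technique -> procedure nesting.
playbook_entry = {
    "tactic": {                      # the "why": the adversary's goal
        "id": "TA0001",
        "name": "Initial Access",
        "techniques": [
            {                        # the "how": a method to reach the goal
                "id": "T1566",
                "name": "Phishing",
                "procedures": [
                    # the "what": a specific observed implementation
                    "Spearphishing email carrying a malicious attachment",
                ],
            }
        ],
    }
}
```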

According to the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the DIB is a worldwide industrial complex of more than 100,000 companies and their subcontractors that provide goods and services to the U.S. military. (p.108)

MITRE ATT&CK framework has become the industry’s de facto standard for representing adversary playbook intelligence. (p.108)

Users of the wiki still need to automate the process of collecting the ATT&CK intelligence and using it to upgrade their internal defenses. They could also streamline the intelligence to make it easier for their red teams and penetration teams to use. (p.108)

The beauty of the kill chain philosophy is that if we are deploying defenses designed to defeat the adversary and not individual unrelated tools, then we have multiple prevention and detection controls deployed across the kill chain looking for known specific adversary activity. (p.109)

…if all the network defenders in the world could have an open source collection of adversary playbook intelligence that’s updated regularly and could be automatically consumed, processed, and tailored for detection and prevention controls for the security stack in place, and automatically deployed in real time. (p.109)

…it would be better if MITRE covered the non‐nation‐state hacking campaigns too: criminals, activists, and mischief makers. Let’s call these the CAMM campaigns. Except for a small handful, the MITRE ATT&CK wiki doesn’t really collect on these. And, as of this writing, there is no equivalent of the MITRE ATT&CK wiki for CAMMs. You can buy it through commercial cyber intelligence companies, but there’s no open‐source equivalent. (p.109)

In their model, they build “activity threads” that combine intelligence and traditional attack graphs into activity‐attack graphs by merging “traditional vulnerability analysis with knowledge of adversary activity.” This is the point where it becomes apparent that the Diamond model is not an alternative to the Lockheed Martin Kill Chain model and the MITRE ATT&CK framework; it is an enhancement. (p.111)

Law enforcement, government spy agencies, the military, and commercial and academic organizations require different kinds of intelligence to be useful in cyberspace. (p.112)

In the APT1 case, the security vendor Mandiant actually hacked back to one of the bad guy’s computers, compromised his camera, and watched his team operate in the room in real time. You can view some of the videos on YouTube. After that operation, Mandiant intelligence analysts had high confidence that the hackers behind APT1 are a Chinese military hacking group belonging to the 2nd Bureau of the People’s Liberation Army (PLA) known as Unit 61398. (p.113)

That means that the total number of adversary campaigns (nation‐state + CAMMs) operating on the Internet on any given day is roughly 256. Since we’re doing some back‐of‐the‐envelope calculations here (see Chapter 6), we know that’s not a precise number, just an educated guess. (p.115)

By 1990, the Forum of Incident Response and Security Teams (FIRST) had become a nonprofit “to bring together incident response and security teams from every country across the world to ensure a safe Internet for all.” (p.118)

As of 2022, there are 657 teams in 101 different countries that belong to FIRST. (p.118)

Small organizations usually accept more risk and don’t have a centralized point to coordinate the activities of multiple groups. IT and security are often done by the same small team. (p.119)

Most don’t try to defeat adversary campaigns across the intrusion kill chain. Instead, they focus on blocking access to technical vulnerabilities that an adversary might use to be successful. (p.120)

Two Cybersecurity Canon Hall of Fame books talk about this history and how to think about this philosophy: Site Reliability Engineering from the team at Google and The Phoenix Project by Gene Kim. (p.124)

The range of options for the security stack is wide. Buyer beware. If you’re doing this, make sure that the SASE vendor’s security stack can handle all of the first principle strategies discussed in this book. (p.126)

By 2022, IT practitioners realized that maybe the SDWAN component of the SASE architecture model wasn’t essential. It was a good idea, and if you have an SDWAN component, then by all means use it. But for everybody else, Gartner offered SSE as an alternative; it’s essentially SASE without the SDWAN meta layer. (p.127)

I expect that most organizations are in the middle somewhere with the SOAR/SIEM model, but they most likely are using it only as a SOC noise reducer and not as an orchestration platform. (p.127)

One last option is to use a secure access service edge (SASE) vendor or its near cousin security service edge (SSE). (p.126)

CTI operations are really nothing more than regular intelligence operations applied to the cyber landscape. (p.128)

According to Professor Vejas Gabriel Liulevicius of the University of Tennessee, “Our earliest evidence of intelligence work comes from the clay tablets of Mesopotamia, and we know from the Bible that spies were used not only by political rivals but also by religious ones in ancient Israel.” (p.128)

Christopher Gabel, writing for the Scholastic blog, defines intelligence operations this way: An intelligence operation is the process by which governments, military groups, businesses, and other organizations systematically collect and evaluate information for the purpose of discovering the capabilities and intentions of their rivals. With such information, or intelligence, an organization can both protect itself from its adversaries and exploit its adversaries’ weaknesses. (p.129)

The process of turning raw information into intelligence products that leaders use to make decisions. (p.129)

The conversion of raw information into something useful—actionable intelligence—is the characteristic that distinguishes a news reporter from an intelligence analyst. (p.133)

In my career, I’ve been a CISO officially three times, and if you count the work I did as the ACERT Commander (the U.S. Army’s CISO), make that four. If I had any success at all in those roles, it was due in part because I was checking in on a regular basis with the organization’s business leaders (customers) to get their feedback on the programs I was working on. (p.135)

Much later, after we started to track nation‐state activity within our networks, we could provide intelligence that the commander could use to plan and utilize to make decisions. (p.136)

Don’t buy them unless they directly support your first principles infosec program and specifically your intrusion kill chain strategy. Point them to the MITRE ATT&CK Evaluation website, a place where vendors prove that their product set can defeat specific known adversary campaigns. (p.136)

Better yet, seek vendors who belong to the Cyber Threat Alliance (a security vendor information security analysis organization [ISAO]). As of this writing, it is a group of some 34 vendors who have agreed to share adversary playbook intelligence with each other so that their customers don’t have to do the work themselves. (p.136)

The CTA’s collection of adversary campaign intelligence is likely the most comprehensive and useful in the industry and can compete head to head with what the U.S. government collects with its intelligence agencies. (p.137)

According to the Organization for the Advancement of Structured Information Standards (OASIS), a nonprofit that promotes the development of open standards on the Internet, STIX stands for Structured Threat Information Expression and is an open source language and serialization format used to exchange cyber threat intelligence (CTI). (p.137)
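Here is what that looks like on the wire: a minimal STIX 2.1 indicator object, assembled by hand in Python. The pattern uses a documentation-range IP address and a made-up campaign name; only the field names come from the STIX specification.

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

# A minimal STIX 2.1 indicator. The IP is from a reserved documentation
# range (TEST-NET-2) and the name is invented for illustration.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid4()}",
    "created": now,
    "modified": now,
    "name": "C2 address from a hypothetical campaign",
    "pattern": "[ipv4-addr:value = '198.51.100.7']",
    "pattern_type": "stix",
    "valid_from": now,
}
print(json.dumps(indicator, indent=2))
```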

In 1971, the U.S. Air Force contracted James Anderson to run Tiger Teams against their Multiplexed Information and Computing Service (MULTICS) operating system, the precursor to UNIX. (p.140)

Today, TLP (Traffic Light Protocol) is a standard best practice for most sharing organizations. (p.146)

According to the U.S. government’s Cybersecurity and Infrastructure Security Agency (CISA), ISACs are non‐profit, member‐driven organizations formed by critical infrastructure owners and operators to share information between government and industry. (p.146)

…certain ISACs get special attention because of their nature. (p.146)

Multi‐State Information Sharing and Analysis Center (MS‐ISAC): The ISAC for state, local, tribal, and territorial (SLTT) governments
Communications ISAC: The ISAC for members from the nation’s major communications carriers
Financial Services Information Sharing and Analysis Center (FS‐ISAC): The ISAC for the financial sector
Aviation Information Sharing and Analysis Center (A‐ISAC): The ISAC for the aviation industry. (p.146)

Arguably, the FBI founded the first ISAO in 1996, although the community wouldn’t have a name for it until two decades later. They called it the InfraGard National Members Alliance, or InfraGard National, and designed it to facilitate information sharing between law enforcement and the private sector. (p.147)

In 2015, U.S. President Obama signed Executive Order 13691 establishing the ISAO framework that made it legal to share information about cybersecurity incidents without fear of prosecution. (p.147)

According to the department’s official website, CISA coordinates cybersecurity defense for the federal government, acts as the incident response execution arm for the national cyber defense, and owns the responsibility of intelligence sharing. (p.148)

The National Cybersecurity and Communications Integration Center (NCCIC) and the United States Computer Emergency Response Team (US‐CERT) work for CISA. (p.148)

CISA manages four formal information sharing programs, one at the senior leadership level (the Joint Cyber Defense Collaborative) and three at the operator level. (p.148)

Joint Cyber Defense Collaborative (JCDC): Established in August 2021 to enhance collaboration with the private sector (one of the six pillars of the Cyberspace Solarium Commission), the JCDC is a group of public and private‐sector organizations as well as federal and state, local, tribal, and territorial (SLTT) government entities designed to bring senior leaders from the government and the commercial sector together to collaborate on global issues. Their first success story was how the group responded to the Log4j crisis in 2021. (p.148)

The commercial side of the JCDC is a collection of high‐end security and cloud providers (such as AWS, Cisco, Crowdstrike, Microsoft, and Palo Alto Networks; as of this writing, 21 in all), but the information sharing mechanisms are Zoom calls and email. Thirty years after the establishment of the first CERTs, intelligence sharing between the government and the private sector is still mostly manual and ad hoc. (p.149)

What is required is not a fundamental shift in strategy. What is required is an adoption of modern tactics. (p.149)

The one criticism for this vision is that if this new adversary campaign intelligence repository was open to anybody who wanted to access it, that means the bad guys could access it too. It would be easy for them to also consume the intelligence to discover what their potential victims know about how they operate. This would allow them to design schemes to avoid the prevention and detection controls these victims were deploying to block their hacking campaigns. All of that is true enough, but it’s not a real threat. (p.150)

It’s expensive enough to build even new tools like malware and exploits. But it’s exceedingly expensive to build new attack sequences from scratch after a one‐time use. (p.151)

As much as we would like to associate Charming Kitten with the Iranian government, the CTI community’s confidence of that attribution should not be high except for some special circumstances. (p.151)

In 2010, the Department of Homeland Security identified resilience in cyberspace as the ability to adapt to changing conditions and prepare for, withstand, and rapidly recover from disruption. (p.157)

From the Netflix website, Chaos Monkey is a tool that randomly disables our production instances to make sure we can survive this common type of failure without any customer impact. (p.159)

Global Resilience Federation has published the first version of the Operational Resilience Framework (ORF). (p.161)

When developing a resilience plan, it is imperative to understand the relationships between people, processes, and technologies. (p.162)

…sometime in the late 1940s, Dutch consultant Ernst Hijams, working for a Canadian consulting firm Leethan, Simpson, Ltd., introduced the idea of linear responsibility charting (LRC). (p.163)

In the early 1950s, as project management evolved, these kinds of project charts became known as RACI charts (for “Responsible, Accountable, Consulted, Informed”) or responsibility assignment matrix (RAM) charts. (p.163)

When the plan goes south during an actual crisis, as it will inevitably do, the important thing is that the team members are so familiar with each other, and the desired outcome is so well understood, that any audibles or improvisations during the event have a decent chance of still leading to the desired result. (p.166)

What I mean by that is the difference between a group of planners and a group of crisis survivors is that the survivors are crystal clear before the instigating event happens about the desired outcome. (p.166)

APT1 hackers had managed to phish the Australian employee, used his account as a beachhead, and then moved laterally through the RSA Security network, escalating privilege, and looking for the data they wanted to steal. In this case, according to Andy Greenberg at Wired, the seed values for the RSA SecurID token product, the two‐factor authentication device used by “tens of millions of users in government and military agencies, defense contractors, banks, and countless corporations around the world.” With those seed values, APT1 could bypass the two‐factor authentication system in all of them. (p.166)

But then, the RSA Security leadership team executed a crisis communication plan to save the company. According to Greenberg, within a week, “One person in legal suggested they didn’t need to tell their customers.” The CEO at the time, Art Coviello, wasn’t having any of that. “He slammed a fist on the table: they would not only admit to the breach, he insisted, but get on the phone with every single customer to discuss how those companies could protect themselves.” When somebody on the staff suggested they codename the crisis plan as Project Phoenix, Coviello rejected it. “We’re not rising from the ashes. We’re going to call this project Apollo 13. We’re going to land the ship without injury.” (p.167)

Black swan events are so unlikely that you never expect to be affected by one (like a meteor hitting the earth), but the impact is catastrophic when they do happen. (p.168)

The priority is to make them aware of the various resilience tactical measures you already have in place that might mitigate the event, such as incident response, backups, and encryption. (p.171)

As they discuss what they would be doing during each phase of the scenario, the exercise crisis team leader would be interjecting what the rest of the company would be doing based on the current plan using the responsibility assignment matrix. (p.172)

Feel free to invite outside parties to the exercise too, like agents from your local FBI field office and even your auditor team. In the FBI case, you don’t want to be meeting your representatives for the first time during an actual crisis. (p.172)

Give your senior executives a lot of chances to make decisions that further the desired outcome before the actual black swan event happens. (p.173)

The experience was, shall we say, humbling, and 15 years later, that’s the one story my wife loves to tell to family and friends when they start asking questions about my storied cybersecurity career. It goes something like this: Ya, let me tell you about my husband and his big fancy pants cybersecurity career when he lost all of the family data for the past 20 years. (p.175)

The lesson learned here is that if a plan is not exercised, it is almost guaranteed to fail. (p.175)

The FBI said in 2021 that they were tracking at least 100 unique ransomware groups. (p.176)

According to Andy Greenberg in his Cybersecurity Canon Hall of Fame book Sandworm, the total recovery costs for the 2017 NotPetya attacks for all the victims combined topped out at $10 billion. (p.176)

According to the Thales Group, the Spartans around 600 BC used a device called a scytale to code plain text into encrypted messages. (p.180)

By 1917, an American named Edward Hebern had invented an electro‐mechanical machine in which the key was embedded in a rotating disc. The next year, 1918, German engineer Arthur Scherbius invented the Enigma machine using more than one rotor, and the German military adopted it to send coded transmissions during WWI and WWII. (p.180)

In 1976, Whitfield Diffie and Martin Hellman created the Diffie‐Hellman key exchange, making it possible to send encrypted messages without having to share a secret key beforehand. This was huge. It’s called asymmetric encryption, and it’s the main idea behind all modern secure web transactions. (p.181)
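The underlying math fits in a few lines. Here is a toy Diffie-Hellman exchange with deliberately tiny numbers; real deployments use 2048-bit-plus groups or elliptic curves, but the mechanics are the same.

```python
# Toy Diffie-Hellman key exchange (educational numbers only).
p, g = 23, 5                 # public: prime modulus and generator

a, b = 6, 15                 # private keys; never sent over the wire

A = pow(g, a, p)             # Alice transmits A = g^a mod p in the clear
B = pow(g, b, p)             # Bob transmits B = g^b mod p in the clear

alice_secret = pow(B, a, p)  # (g^b)^a mod p
bob_secret = pow(A, b, p)    # (g^a)^b mod p
assert alice_secret == bob_secret
print(alice_secret)          # 2: the shared secret, never transmitted
```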

Gartner’s David Mahdi and Brian Lowans, in a paper they wrote in 2020, said that security executives “struggle to understand the capabilities and limitations that encryption key management (EKM) solutions provide, and how to properly configure them.” (p.182)

According to Mahdi and Lowans, enterprise key management is on the Gartner slope of enlightenment but 5 to 10 years away from the plateau of productivity. (p.182)

… my all‐time computer science hero, Dr. Clifford Stoll. If there were baseball cards for computer science giants, my collection would include Grace Hopper, Alan Turing, and multiple copies of Doctor Stoll. His Cybersecurity Canon Hall of Fame book The Cuckoo’s Egg was one of the first, and still one of the most influential, cybersecurity books ever published. One of the reasons his book remains influential more than 30 years later is that he almost single‐handedly invented incident response. The best practices he developed haven’t changed that much in the years since. (p.185)

In 1988, he published a paper from his logbook in the journal Communications of the ACM, which eventually turned into the book he published in 1989. (p.186)

It’s worth noting that the other industry‐recognized incident response framework is from a commercial security vendor training and certification company called SysAdmin, Audit, Network, and Security (SANS). (p.187)

According to NIST’s 2018 “Framework for Improving Critical Infrastructure Cybersecurity,” an update to the original 2014 publication, NIST authors developed their cybersecurity risk management guidance to improve the U.S. government’s critical infrastructure. That said, the guidance is universal enough that it can be applied “by organizations in any sector or community. The Framework enables organizations—regardless of size, degree of cybersecurity risk, or cybersecurity sophistication—to apply the principles and best practices of risk management to improving security and resilience.” It’s essentially an incident response manual. (p.188)

Essentially all models are wrong, but some are useful. —George Box (p.194)

The book that changed my mind about risk forecasting, the fact that it could be done, is called Superforecasting. (p.195)

if your outside‐in forecast says that there is a 20 percent chance of material impact due to a cyberattack this year for all U.S. companies, that’s the baseline. Then, when you do the inside‐out assessment by looking at how well your organization is deployed against our first principle strategies, you might move the forecast up or down depending. (p.197)

Use dragonfly eyes: Consume evidence from multiple sources. Construct a unified vision of it. Describe your judgment about it as clearly and concisely as you can, being as granular as you can. (p.197)

The point to all of this is that it’s possible to forecast the probability of some future and mind‐numbingly complex event with enough precision to make decisions. If the Geezers‐on‐the‐Go can accurately predict the future of the Syrian president, surely a bunch of no‐math CISOs like me can forecast the probability of a material impact due to a cyber event for their organizations within a reasonable margin of error. That’s cybersecurity risk forecasting. (p.198)

One of his guys says that he fronted the bad recommendation about WMD in Iraq, and because of that failure, they don’t deal in certainties anymore. They deal in probabilities. (p.198)

Getting a precise estimate is hard and time‐consuming, but getting an estimate that’s in the right ballpark in terms of order of magnitude is relatively easy and will probably be sufficient for most decisions. (p.203)

According to Sharon McGrayne, author of the 2011 book The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy: By updating our initial beliefs with objective new information, we get a new and improved belief. She says that Bayes is a measure of belief. And it says that we can learn even from missing and inadequate data, from approximations, and from ignorance. (p.206)
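In code, a Bayes update is one line of arithmetic. This sketch uses made-up numbers: a prior belief about a material breach this year, revised after observing some piece of evidence E.

```python
# Bayes' rule with illustrative numbers (none of these come from the book).
prior = 0.20            # P(H): initial belief that H (material breach) is true
p_e_given_h = 0.60      # P(E|H): chance of observing evidence E if H holds
p_e_given_not_h = 0.10  # P(E|~H): chance of observing E if H does not hold

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # total P(E)
posterior = p_e_given_h * prior / p_e                      # Bayes' rule
print(round(posterior, 2))  # 0.6: the "new and improved belief"
```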

…you could argue that Laplace was also a founding father of industrial control systems and operational technology, another critical domain in cybersecurity. (p.206)

…what is the probability that any company would get hit with a material impact cyberattack? This is our first Fermi estimate. (p.213)

…the FBI’s Internet Crime Complaint Center (IC3) said that they received just under a million complaints (847,376) in 2021. Let’s assume that all of those represent material losses. That’s probably not true, but that’s our first assumption to note. But the IC3 also estimated (their assumption) that only 15 percent of organizations actually report their incidents. So, how many total should there be? Doing the math (see Figure 6.5), that means that more than five and a half million (5,649,173) U.S. incidents likely occurred in 2021. (p.213)

The number of unreported material complaints is equal to what the total number of incidents IC3 expected occurred in 2021 (5,649,173) minus the known reported complaints (847,376). Doing the subtraction, that number is just over four and a half million (Z = 4,801,797).
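The arithmetic reproduces exactly in a few lines:

```python
# The book's back-of-the-envelope IC3 numbers (p.213).
reported = 847_376        # complaints the FBI IC3 received in 2021
reporting_rate = 0.15     # IC3's assumption: only 15% of victims report

total = round(reported / reporting_rate)  # expected total incidents
unreported = total - reported             # Z, the unreported complaints

print(f"{total:,}")       # 5,649,173
print(f"{unreported:,}")  # 4,801,797
```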

What that means is that there is a different probability for different values of loss, as the Cyentia report shows.

To recap, I used two different frequentist data sets. I used the FBI IC3 data and some Fermi estimations to find the initial prior. I then used the Cyentia report to make an adjustment to that initial forecast. The bottom line is that, for Marvel Studios, I’m forecasting the probability of material impact this year as 17 percent, or just under a 1 in 5 chance. Remember, as you discover new evidence about our assumptions or new facts become available, adjust the estimate up or down based on the new information. But for now, we have a new prior of 17 percent. (p.220)

Dr. Tetlock’s book on superforecasting opened my mind to the idea that infosec professionals didn’t need precision answers to make resource decisions about security improvements. (p.226)

It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change. —Charles Darwin. (p.229)

Collects the system stats and Fermi assumptions that allow the calculation of the organization’s risk forecast, the next Bayes’ prior. (p.230)

In 1985, to address the software crisis issue, the U.S. Department of Defense (DoD) adopted the Waterfall model as a requirement for all contractors. (p.231)

They began toying with the Rational Unified Process (1994), Scrum (1995), and Extreme Programming (1996). But in February 2001, 17 programmers traveled to Utah for a long weekend of skiing and discussions about building software. The result was the Agile Manifesto: a rejection of the Waterfall model and an embracement of the idea of producing real, working code as a milestone of progress. (p.232)

The result was the creation of the first Microsoft Security Development Lifecycle (SDLC). (p.232)

In 2003, Dave Wickers and Jeff Williams, working for Aspect Security, a software consultancy company, published an education piece on the top software security coding issues of the day. That eventually turned into the Open Web Application Security Project (OWASP) Top 10, a reference document describing the most critical security concerns for web applications. (p.232)

According to IBM, DevSecOps is the “integration of security at every phase of the software development.” It’s great that this kind of thinking and deployment is so mature and the security community should embrace the progress. (p.237)

The security community has to attach ourselves to the existing CI/CD pipeline process. In other words, we have to become part of the internal DevOps program, not resist it or build our own. (p.238)

NIST standards products have expanded out of the U.S. federal government and into the commercial sector too because they’re free, vendor agnostic, and normally of the highest quality. (p.239)

According to Nick Inman at Kroll Consulting, about a third of his clients forecast that they will spend greater than 5 percent of revenue to satisfy compliance requirements. (p.240)

But today’s IT environments are systems of systems. We’re in PhD land here. They are complicated, and most of us have no idea how they actually work and what the real dependencies are between all the software modules deployed on all of our data islands. (p.245)

According to Rosenthal, Jones, and Aschbacher in their book Chaos Engineering: System Resiliency in Practice: “A change to input of a linear system produces a corresponding change to the output of the system. Nonlinear systems have output that varies wildly based on changes to the constituent parts.” It’s like that old chestnut that when a butterfly flaps its wings in China, you might end up with a hurricane in the Gulf of Mexico. When the hard drive of a system running a nonessential monitoring app in an AWS region in North America fails but somehow causes a system wide failure, this is what I’m talking about. (p.246)

Chaos engineering is built on the scientific method. DevOps teams develop a hypothesis around steady‐state behavior and run experiments in production to see if the hypothesis holds. If they discover a difference in steady state between the control group and the experimental group on production systems, then they have learned something new. If not, they have gained more confidence in their hypothesis. They use techniques to minimize the blast radius on the production system and monitor the entire experiment carefully to ensure no catastrophic effect, but they have to be on the production system to do it. (p.246)
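Stripped of the tooling, the experimental loop is simple. Here is a minimal sketch with hypothetical service names: measure a steady-state metric, inject one failure, and compare. Real chaos tools add the blast-radius safeguards the authors describe.

```python
import random

# A toy chaos experiment: the hypothesis is that losing one instance leaves
# the steady-state metric (serving capacity) within tolerance.
instances = ["web-1", "web-2", "web-3", "web-4"]

def steady_state(live: list[str]) -> float:
    # Stand-in metric: fraction of total capacity still serving traffic.
    return len(live) / len(instances)

baseline = steady_state(instances)           # control-group measurement
victim = random.choice(instances)            # the injected failure
survivors = [i for i in instances if i != victim]
observed = steady_state(survivors)           # experimental measurement

tolerance = 0.30
verdict = "holds" if baseline - observed <= tolerance else "fails"
print(f"killed {victim}: {baseline:.2f} -> {observed:.2f}, hypothesis {verdict}")
```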


…Netflix routinely runs an app, like Chaos Monkey, that randomly destroys pieces of their customer‐facing infrastructure, on purpose, so that their network architects understand resilience engineering down deep in their core. (p.246)

The Netflix leadership team very publicly announced its commitment to adopt AWS cloud services and abandon its own data centers. This was a big idea since Amazon just rolled out the service two years before and it wasn’t what anybody would claim as mature yet. (p.247)

Further, that Christmas in 2008, AWS suffered a major outage that prevented Netflix customers from using the new streaming service. In response, Netflix engineers developed their first chaos engineering product in 2010, called Chaos Monkey, that helped them counter the vanishing instance problem caused by the AWS outage. (p.247)

By 2011, Netflix began adding new failure modules that provided a more complete suite of resilience features. Those modules eventually became known as the Netflix Simian Army and include colorful names like Latency Monkey, Conformity Monkey, and Doctor Monkey, just to name three. (p.248)

Netflix shared the source code for Chaos Monkey on GitHub in 2012, and by 2013, other organizations started playing with the idea. By 2014, Netflix created a new employee role (chaos engineer) and began working on ideas of reducing the blast radius of planned injected failures. (p.248)

In terms of first principles, the CSO’s job description should be to discover unknown faults in the system that will cause material damage. (p.248)

…for big Silicon Valley companies that deliver services from around the world (the Netflixes, the Googles, the LinkedIns, etc.) and for most Fortune 500 companies, chaos engineering is something to consider. (p.250)

Writing long books is a laborious and impoverishing act of foolishness: expanding in 500 pages an idea that could be perfectly explained in a few minutes. A better procedure is to pretend that those books already exist and to offer a summary, a commentary. —Jorge Luis Borges (p.255)

The absolute cybersecurity first principle is this: Reduce the probability of material impact due to a cyber event over the next three years. (p.257)

Every one of them has the potential to reduce the probability of material impact due to a cyber event. Being able to measure that impact (risk forecasting) is an essential skill for all infosec practitioners but especially for senior cybersecurity leaders. (p.261)

For the people who no one imagines anything of and who will do the things that no one can imagine, it’s time to step up. —Rick Howard, author. (p.294)

To Seneca who has given me the best advice, like: life is very short and anxious for those who forget the past, neglect the present, and fear the future. I would dedicate it to my family, but they will never read this. —Steve Winterfeld, editor (p.294)

We wrote this to satisfy mobs,
though me and my friends are all swabs.
So thanks to you hackers,
And bad cyber actors,
Without whom we wouldn’t have jobs. —Brandon Karpf, editor (p.294)

Thanks

I would like to thank Rick Howard for his great work. I believe that his proposal could become a main principle of the cybersecurity industry. His additional study materials can be found on the book’s official website.
