In my last post, I talked a little about how cyber security is a human problem and can be described in a way that has nothing to do with technology. This post will explore how ignoring that fact always leads (and, so far, pretty much has led) to a strategic cyber security loss by creating an unacceptable offensive advantage.

Fundamentally, there are five often ignored truths that I’ll use to make my case:

1. Cyber security is a problem that occurs over unbounded time (thanks, Winn Schwartau). In other words, measuring state at any single point in time doesn't give a complete picture of what your risk actually is. Just for example: time to detection, time to compromise, and how often and when changes occur are all problems that cannot be described as single points:

Cyber security is actually a *rate* problem overall: how many errors occur per time period, and how many resources does it take to address them? (I'll sketch this with a toy model further down.)

A strategic win is when the relationship between the error rate and the mitigation rate consistently remains at an acceptable level or better.

2. Complexity is constantly increasing. Collectively, we're always building new systems and adding new features at a frenetic pace. This means that:

As complexity increases, if the error rate stays the same, the resources needed to mitigate must increase unless those resources become more efficient.

3. Resources are limited. At some point, you simply cannot add more resources, and so:

Since resources are limited, either the error rate must be adjusted or the resources must be made infinitely more efficient (to keep up with constantly increasing complexity).

4. Human behavior defines every aspect of security state. Just for example:

DEVELOPERS build TECHNOLOGY 
ENGINEERS build TECHNOLOGY 
ARCHITECTS design TECHNOLOGY 
IT STAFF change TECHNOLOGY 
AUTHORIZED PEOPLE operate TECHNOLOGY
SECURITY STAFF protect TECHNOLOGY
EXECUTIVES/OWNERS require TECHNOLOGY

5. Quoting from a previous post: humans' hopes, dreams, passions, fears, biases, moods, and biochemistries dictate what they do. They're not perfect. They make mistakes. In other words:

Human behavior is what causes the cyber security errors that result in compromise. 

Defensive resource efficiency is also negatively affected by the rate of human behavior errors.

Therefore, if the rate at which people (“Users”) make mistakes is not managed, and their activities are not subject to some level of long-term quality assurance and control, the increasing complexity of systems ensures that errors will eventually grow beyond what the available defensive resources can mitigate, even in the face of tactical efficiency improvements.
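
To make the rate argument concrete, here's a deliberately toy simulation of it. Every number below (the complexity growth rate, the human error rate, the mitigation capacity) is a made-up assumption chosen purely for illustration, not a measurement of anything:

```python
# Toy model: errors arrive as a rate proportional to complexity, while a fixed
# pool of defensive resources can only mitigate so many errors per period.
PERIODS = 40                 # time periods (say, quarters)
complexity_growth = 0.05     # complexity compounds 5% per period (assumption)
errors_per_unit = 0.10       # unmanaged human error rate per unit of complexity (assumption)
mitigation_capacity = 12     # errors the defenders can fix per period (fixed resources)

complexity = 100.0
backlog = 0.0                # unmitigated errors carried forward

for _ in range(PERIODS):
    new_errors = errors_per_unit * complexity           # truth #1: errors are a rate, not a point
    fixed = min(backlog + new_errors, mitigation_capacity)
    backlog = backlog + new_errors - fixed               # whatever capacity can't absorb piles up
    complexity *= 1 + complexity_growth                  # truth #2: complexity keeps climbing

print(f"unmitigated errors after {PERIODS} periods: {backlog:.0f}")
# With these assumptions the backlog keeps growing: the strategic loss described above.
```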

Adjusting the effectiveness of resources (by automating a patching program or adding malicious code detection, for example) does boost the level of defensive capability from that point on. But because resources will max out, and because effectiveness is ultimately bounded at the top by the error rate (which is a human problem), defensive capability will still eventually flatten out against the vulnerabilities introduced by increasing complexity and an unmanaged error rate, and a strategic loss will occur.
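
Here's the same toy model with a one-time efficiency boost bolted on: say a tooling rollout at period 10 that doubles mitigation capacity (again, every number is an illustrative assumption):

```python
# Same toy model as above, plus a one-time capacity boost at period 10.
PERIODS = 40
complexity_growth = 0.05
errors_per_unit = 0.10
mitigation_capacity = 12

complexity, backlog = 100.0, 0.0
for t in range(PERIODS):
    capacity = mitigation_capacity * (2 if t >= 10 else 1)   # the efficiency boost
    new_errors = errors_per_unit * complexity
    fixed = min(backlog + new_errors, capacity)
    backlog = backlog + new_errors - fixed
    complexity *= 1 + complexity_growth

print(f"unmitigated errors after {PERIODS} periods: {backlog:.0f}")
# The boost buys time, but complexity (and so the error volume) keeps compounding while
# capacity is flat again after the jump, so the backlog eventually diverges anyway.
```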

If, however, the error rate is reduced (by adjusting user behavior and turning it into culture), the rate at which vulnerabilities are introduced can be kept in check well enough for defensive capabilities to remain effective, even in the face of increasing complexity. (Assumed: the number of humans to be changed is far more static than the level of complexity.)
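
And the same toy model one last time, except now the error rate itself is being managed down a few percent per period through behavior and culture work (still nothing but illustrative assumptions):

```python
# Same toy model, but the human error rate is actively managed down each period.
PERIODS = 40
complexity_growth = 0.05
errors_per_unit = 0.10       # starting error rate per unit of complexity
error_rate_reduction = 0.05  # ongoing improvement from behavior change (assumption)
mitigation_capacity = 12

complexity, backlog = 100.0, 0.0
for _ in range(PERIODS):
    new_errors = errors_per_unit * complexity
    fixed = min(backlog + new_errors, mitigation_capacity)
    backlog = backlog + new_errors - fixed
    complexity *= 1 + complexity_growth
    errors_per_unit *= 1 - error_rate_reduction          # the error *rate* is what gets managed

print(f"unmitigated errors after {PERIODS} periods: {backlog:.0f}")
# When the error rate falls at roughly the pace complexity grows, the error volume stays
# inside mitigation capacity and the backlog never takes off; risk is held at an acceptable rate.
```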

Once the strategic gap between vulnerabilities and defensive capabilities is no longer growing with complexity over time, measures such as automated patching programs and malicious code detection can be used to change the day-to-day relationship between offense and defense, allowing an acceptable level of risk to be achieved as a function of rate, not of a moment in time.

….The aristocrats.
