
In my last post, I talked a little about how cyber security is a human problem that can be described in a way that has nothing to do with technology. This post explores how ignoring that fact will always lead to (and, so far, pretty much has led to) a strategic cyber security loss by creating an unacceptable offensive advantage.

Fundamentally, there are five often-ignored truths that I'll use to make my case:

1. Cyber security is a problem that occurs over unbounded time (thanks, Winn Schwartau). In other words, measuring state at any single point doesn't provide a complete picture of what your risk actually is. For example: time to detection, time to compromise, and how often and when changes occur are all problems that cannot be described as single points:

Cyber security is actually a *rate* problem overall: how many errors occur per time period, and how many resources does it take to address them?

A strategic win is when the relationship between the error rate and the mitigation rate constantly remains at an acceptable level or better. (I've sketched this in code just after the list of truths below.)

2. Complexity is constantly increasing. We're, collectively, always building new systems and adding new features at a frenetic pace. This means that:

As complexity increases, if the error rate stays the same, resources to mitigate must increase unless those resources become more efficient.

3. Resources are limited. At some point, you cannot increase the number of resources and so:

Since resources are limited, either the error rate must be adjusted or the resources must be made infinitely more efficient (to account for constantly increasing complexity).

4. Human behavior defines every aspect of security state. Just for example:

DEVELOPERS build TECHNOLOGY 
ENGINEERS build TECHNOLOGY 
ARCHITECTS design TECHNOLOGY 
IT STAFF change TECHNOLOGY 
AUTHORIZED PEOPLE operate TECHNOLOGY
SECURITY STAFF protect TECHNOLOGY
EXECUTIVES/OWNERS require TECHNOLOGY

5. Quoting from a previous post: humans' hopes, dreams, passions, fears, biases, moods, and biochemistries dictate what they do. They're not perfect. They make mistakes. In other words:

Human behavior is what causes the cyber security errors that result in compromise. 

Defensive resource efficiency is also negatively affected by the rate of human behavior errors.

Therefore, if the rate at which people ("Users") make mistakes is not managed, and their activities are not subject to a certain level of long-term quality assurance and control, the increasing complexity of systems assures that errors will eventually increase beyond the levels that available defensive resources can mitigate, even in the face of tactical efficiency improvements.
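Just to make truth #1 concrete before moving on: here's a minimal sketch of the win condition as code. It's mine, not part of the original talk, and all the numbers are illustrative:

```python
def strategic_win(error_rates, mitigation_rates, acceptable_backlog=10):
    """Judge security over a whole time series, never a single point.
    Rates are errors (or mitigations) per period; the backlog threshold
    is an illustrative stand-in for an 'acceptable level of risk'."""
    backlog = 0
    for errors, mitigations in zip(error_rates, mitigation_rates):
        backlog = max(0, backlog + errors - mitigations)
        if backlog > acceptable_backlog:
            return False  # the rate relationship slipped past acceptable
    return True

# Any one period looks survivable; the trend is what loses the game:
print(strategic_win([5, 6, 8, 11, 15], [7, 7, 7, 7, 7]))  # False
```

No single period's numbers answer the question on their own – the win or loss only shows up in the relationship between the two rates over time.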

Adjusting the effectiveness of resources (by automating a patching program or adding malicious code detection, for example) does boost defense capability from that point on. But, because resources will max out, and because effectiveness is ultimately bounded at the top by the error rate (which is a human problem), defense capability will still eventually flatten out against the vulnerabilities introduced by increasing complexity and an unmanaged error rate, and a strategic loss will occur.

If, however, the error rate is reduced (by adjusting user behavior and turning it into culture), the rate at which vulnerabilities are introduced can be kept in check enough to allow defensive capabilities to remain effective – even in the face of increasing complexity. (Assumption: the number of humans to be changed is much more static than the level of complexity.)

Once the strategic distance between vulnerabilities and defensive capabilities is no longer growing with complexity over time, measures such as automated patching programs, malicious code detection, etc. can be used to change the day-to-day relationship between offense and defense, allowing the potential for an acceptable level of risk to be achieved as a function of rate, not of a moment in time.
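If it helps to watch the flattening-out happen, here's a rough simulation of the argument above. Every number in it (the growth rate, the resource cap, the efficiency boost) is an assumption I picked to show the shape of the curves, not measured data:

```python
def gap_over_time(periods=16, error_rate=0.10, efficiency=1.0,
                  resource_cap=50.0, growth=1.20):
    """Toy model: complexity grows each period (truth #2), errors are a
    human-driven fraction of complexity (truth #5), and mitigation is
    hard-capped by limited resources (truth #3). Returns the backlog of
    unmitigated vulnerabilities after each period."""
    complexity, backlog, gaps = 100.0, 0.0, []
    for _ in range(periods):
        complexity *= growth
        errors = error_rate * complexity
        capacity = resource_cap * efficiency
        backlog = max(0.0, backlog + errors - capacity)
        gaps.append(round(backlog, 1))
    return gaps

print(gap_over_time())                # unmanaged: the gap opens and grows
print(gap_over_time(efficiency=2.0))  # a tooling boost delays, then loses
print(gap_over_time(error_rate=0.02)) # managed behavior: gap stays closed
```

In this toy, doubling resource efficiency only pushes the crossover out a few periods; lowering the human error rate is what keeps mitigation ahead of complexity across the whole horizon.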

….The aristocrats.

A year ago, Kelley Bray and I gave a talk at B-Sides Chicago on "A Squishy Model of Cyber Security". Recently, there have been some posts on the SIRA mailing list discussing different perspectives on the importance of users and training and security and all that vs. other controls like patching or malicious code detection, and it made me decide to convert that talk into a blog post.

This is my first post of two on the topic. The second will be more "me" and will specifically outline the difference between looking at cyber security as a strategic vs. a tactical problem, and the implications for the "user" conversation. (Caveat: a lot really depends on how you define "users", so I'm careful to do so here.) Enjoy, and thanks to everyone who's contributed to the shape of my brain.

Click the pictures to make them big enough to read.

What is a network?

Let’s pretend it’s newly birthed whole, untouched.

—-

Well, that’s almost a network. We still need users.

Who are Users? They’re everyone who can affect your network.

Don’t agree? All basic attributes are the same…

The only things that change are implementation details: roles, motivations, environment

—-

This is a more complete picture

These users – in their roles – affect equipment.

That causes the computers to be in a given state.

Even with environmental constraints, like hardware, baseline configs, security configs, etc…

Human actions occur before the network is built.

 

—-

To be secure, we can influence the decisions made, or put in place technical controls

Both options affect the same logical chains of actions, just in different places. 

—-

But do we even need to describe technical controls?

Because honestly, computers are just proxies for the will and desire of users.

Which set of users is responsible for computer action just depends on where in time you're looking.

—-

Putting in technical controls requires influencing decisions made by users

Therefore it’s pretty clear that user activities can be used independent of technology to describe security.

—-

Specifically, if time is collapsed, authorized user roles and their associated attributes are the network:

Which leads us to some interesting implications:

You can group users' activities into common role-based activities in a way that helps you manage and manipulate the human squishy stuff.

The specific attributes are out of scope here, but take home the idea that, at a non-technology-specific level, they're finite and discrete.

This means humans can be addressed as potential state attributes directly.
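The slides keep the actual attributes out of scope, but purely as a hypothetical sketch (the type and every field name below are mine), "users as state attributes" might look something like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserRole:
    """Hypothetical role-as-state-attribute: each field is finite and
    discrete, so a role can sit in a state model next to configs."""
    role: str                   # e.g. "developer", "it_staff"
    can_alter_state: bool       # may this role change technology?
    behavior_managed: bool      # is its error rate under QA/culture?
    expected_error_rate: float  # mistakes per period, illustrative

# The "network", with time collapsed, as a set of role attributes:
network_state = [
    UserRole("developer", True, False, 0.10),
    UserRole("it_staff", True, True, 0.03),
    UserRole("security_staff", True, True, 0.02),
]
```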

And so no matter what you do, if you do not influence user behavior, you will never be secure.

What are the implications? See next blog post!

With all the blah blah blah going on about CISPA, I've managed to keep my mouth shut about it for a while, but it turns out I do have something to contribute to the dialogue (or, I think I do :) ).

I’m not going to review the language of the bill – I’m sure it’s terrible. Most cyber legislation is. It can’t not be. They all go too far, lack clarity of language, introduce unforeseen escalations of government rights, etc.

There’s no need to go over the givens. :)

So, then, what? Well, after I finally read CISPA and the surrounding reporting, what I noticed was that very few people seem to understand that the bill didn't come out of nowhere. The language in it, the motivations behind it, the structure of the bill… all of it… completely reflects the information sharing discussion that's been going on for years between those engaged in public/private partnership cyber security activities. It's not just a random congressional fart. Anyone who has been part of that discussion should recognize the bill as an old… if not friend… sparring partner.

For those who don't know, there is, in this space, an institutionalized gridlock in the debate about information sharing. CISPA clearly is an attempt to remedy this very, very specific gridlock. It's not a general cyber security bill. It's not even a general information sharing bill. It is designed to address the perspective that the government has information it won't share, that clearances have been roadblocks, and that legal ambiguities have prevented sharing.

Now, while I happen to think that some of these are in fact roadblocks, I also know CISPA doesn't touch the heart of the most severe and core information sharing problems. But, unfortunately, I'm in the minority. A great number of otherwise intelligent people do believe in what it's trying to accomplish, its typically terrible language notwithstanding.

Maybe no one else finds this worth noting, but I at least thought it was unusual that the structure of the existing conversation is so clearly reflected in a piece of legislation…

(The following was written for the upcoming NESCO Energy Sector Cyber Security Risk Management Town Hall program book.)

I’ve seen people fly, I’ve seen birds fly, I’ve seen a horse fly, I’ve even seen a house fly, but I’ve never seen an organization fly. And, as silly as it might seem, this really does have significant implications for managing cyber risk – especially when we look incredulously at the many public compromises and wonder “why does it keep happening?”.

A good way of approaching that question is to look at where cyber risk management is “succeeding”. Succeeding? Yes! Cyber risk is, in fact, being managed – and quite well! If you doubt this, you might need to ask yourself important questions like “Which risks are being managed?” and, more importantly, “Which risks to *whom*?”

What I mean to say is that, while organizations can have an effect on the world around them, they can't actually be seen or touched. They're not tangible and they can't…"fly". Instead, they are the conceptual sum of the many varied decisions of individual people. These conceptual sums are inanimate; they cannot – and do not – feel risk. Instead, it is their executives, owners, employees, and customers who feel risk. Their soft squishy human hopes, dreams, passions, fears, biases, moods, and biochemistries ultimately drive organizational "risk tolerance" and we should never forget it. Here, it's crucial to understand that people almost exclusively put risks to themselves ahead of all others (including an organization's).

So, then, if the “collective” risks to individuals do trump all else, where do we look for ownership and resolution?

Well, some would say “users”, but do “users” (or “individual performers”) care more about meeting their boss’s expectations or saving the intangible organization from invisible adversaries and hidden costs without direction? Probably the former.

Further, while "the bosses" who set these expectations might see that the cyber problem exists, their primary risks revolve around meeting their own senior leadership's expectations as well.

Ok, but isn't IT Security key to cyber risk management? Not really. IT Security, like any other group, must align themselves with their senior leaders' and executives' priorities. Without that alignment, they hold no sway and have no effect.

So, then, it’s on Executives. Senior leaders, what drives your risk appetites?

I ask because cyber risk management is a hard problem. Aren’t you safest if you follow best practices and “buy Cisco”? Ultimately, if you do and your organization gets compromised, what happens to you? Most likely very little – you did your best after all. Is it even in your best interest, then, to know cyber is a hard problem? If you’re aware that best practices have been failing like communism, aren’t you then obligated to come up with solutions of your own? Wow. No way. It’s best to believe the hype; best to buy Cisco; best to keep transferring the risk.

Intentional ignorance (or lack of “awareness”) isn’t just bliss, it also reduces risk to those people directing organizations and dictating the priorities of their human building blocks.

Hey! Long time no post. As a quick follow-up to the last few posts, our Cyber Security in Transportation Conference ended up with 300+ attendees from industry and government! It was fun, educational, and wildly successful.

Now, I'm back here to encourage you (if you have a personal or professional interest in Energy Critical Infrastructure Cyber Security and/or Risk Management) to attend the Security Risk Management Practices for Electric Utilities Town Hall in New Orleans this May 30-31, put on by NESCO.

I’ll be speaking as part of a panel and am looking forward to some fantasic conversations! More info below:

 —

Electric Sector Cybersecurity Town Hall

Security Risk Management Practices For The Electric Sector

Presented by: National Electric Sector Cybersecurity Organization

Hosted by: Entergy – http://www.entergy.com/

Security risk management is a topic of continued discussion in the electric sector. It can be a daunting and often overwhelming task when faced with trying to implement the many security risk management models available.

This town hall style meeting brings together many of the industry's leading security professionals to explore security risk management practices for the electric sector in depth.

You will have the opportunity to participate in open discussions with security risk experts, hear about solutions implemented by utility security teams, and learn more about industry-specific security risk management guidelines.

You are invited to be part of this important meeting.

For more information click here http://nescotownhall2012.eventbrite.com/ or call Abbie Trimble at 503-446-1223 or email abbie@energysec.org

 

Presenters

William N. Bryan – Manage Risk Before It Manages You
U.S. Department of Energy, Deputy Assistant Secretary, Infrastructure Security and Energy Restoration

Matthew Light – Overview of the Cyber Security Risk Management Process
U.S. Department of Energy, Infrastructure System Analyst

Patrick Miller – Electric Sector Risk Management: Past, Present and Future
National Electric Sector Cyber Security Organization (NESCO), Principal Investigator

Katie Jereza – Aha! Valuable Tools for Managing Supply Chain Risk
Energetics Incorporated, Program Director / U.S. Resilience Project, Liaison

 

Moderator

Brandon Dunlap – Brightfly, Managing Director of Research

 

Panelists

Prudence Parks, United Telecom Council, Director of Government Affairs and Legislative Council

Robert Coles, National Grid, CISO & Head of Digital Security and Risk

Dave Lewis, AMD, Senior Information Security Analyst

Ben Tomhave, Lockpath, MS, CISSP, Principal Consultant

Craig Miller, NRECA, Senior Program Manager

Jack Whitsitt, TSA/DHS, Team Lead, Cyber Security Awareness and Outreach

Louis Dabdoub III, Entergy, Manager, Corporate Security

Mark Ellister, Eugene Water and Electric Board, Sr. Security Specialist


Presented by the National Electric Sector Cyber Security Organization (NESCO), a program of EnergySec

