
This year, thanks largely to Josh Corman, I had the opportunity to speak at Source Boston. It was an interesting experience and the first time in a couple of years I've had the chance to talk in front of a general security/hacker audience (BSides Chicago was the last), versus one focused specifically on critical infrastructure (like a NATO conference in Tbilisi, Georgia). Thanks, Josh. Also, thanks to Jen Giroux for helping me lens myself – your perspective was crucial.

More important than my talk are the slides themselves. I managed to put together one of the only presentations you'll find with a relatively short summary of the critical infrastructure landscape that also provides some framing help and advice on how to approach the topic more effectively (see this post for a longer treatment of the executive order sections). It's meant to have a strong verbal component, so if something seems incomplete or you need more information, feel free to ping me. I hope you enjoy. (PDF HERE)

(Consider viewing these full screen)


This is part of a larger post I'm doing for work. The quality assurance concepts are described in more depth in a previous post. I will update this later with diagrams, etc., which will distinguish it further from the older posts. But for now, consider this a draft:

FOUNDATIONAL CONCEPTS

Cybersecurity is a quality assurance problem that occurs unbounded over time; what we are tackling is not a matter of fixing individual errors, but of reducing their frequency to a level where we can continuously afford to remedy the ones that do happen. Multiplied by the increasing number of cyber systems we develop or change every year, the errors requiring mitigation are constantly increasing and will exceed defensive resources without a reduction in the rate at which they are made.
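As a back-of-the-envelope illustration (every number below is an assumption of mine, chosen only to show the shape of the problem, not drawn from any dataset), consider what happens when the population of systems grows while remediation capacity stays fixed:

```python
# Toy arithmetic (all numbers assumed): if the number of systems grows each
# year while the error rate per system stays flat, the total number of errors
# requiring mitigation eventually exceeds any fixed remediation capacity.

systems = 1_000          # systems in year 0 (assumed)
GROWTH = 0.15            # fraction of new/changed systems added per year (assumed)
ERRORS_PER_SYSTEM = 2.0  # errors introduced per system per year (assumed)
CAPACITY = 2_600         # errors defenders can afford to remedy per year (assumed)

for year in range(6):
    errors = systems * ERRORS_PER_SYSTEM
    status = "OK" if errors <= CAPACITY else "EXCEEDS CAPACITY"
    print(f"year {year}: {errors:,.0f} errors vs capacity {CAPACITY:,} -> {status}")
    systems *= 1 + GROWTH
```

With these made-up numbers, defenders keep pace for two years and are underwater from year two on; only lowering the per-system error rate changes the shape of the curve.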

We must reframe the discussion to account for this “quality assurance” perspective if there is any hope of improving the quality of our cybersecurity posture.  Direct experience has shown at least four areas requiring focused development to successfully broaden the cybersecurity dialogue:

1. Success Criteria: To date, much of the cybersecurity conversation has lacked coherent, actionable risk reduction objectives. The development of a “Common Operational Picture”, for example, is only a tool to reduce risk, not a strategic goal.  Similarly, while a Minimum Level of Hygiene is a useful description of a suite of efforts, it does not speak to what the specific success of those efforts would look like.  Instead, success criteria should speak to business and national security priorities to be enabled at defined performance levels in the face of cybersecurity errors.  If we can begin to describe objectives in this way, we will be more successful at building mechanisms to achieve them.

2. Holistic Inclusion: Cybersecurity has traditionally been the territory of IT and security specialist staff, but an analysis of the timeline on which cybersecurity problems occur shows that those roles have far less impact than the ones that are not specialized. Because of their part in defining success criteria and operating cyber systems, business leaders, operations staff, managers, procurement officers, and many others have far more impact on the state of cybersecurity over time than those whose roles focus on it.

3. Common Framing: It is very difficult to solve a problem as a group when the members of the group, because of their backgrounds, have different ideas of what the problem actually is. Cybersecurity is a complicated, multi-dimensional problem which must be solved at several discrete, if interdependent, levels. Often, those who work at one level are not aware of the others or how they fit in. If asked what cybersecurity means, people in different roles may give wildly different answers. Even explaining what one cybersecurity tool or framework does versus another requires a common framing of cybersecurity that experience has shown to be lacking in most cases. Any national initiative should, at a minimum, take this problem into account, and ideally work actively to solve it.

4. Trust: In today's world, businesses are part of a larger system of industry, national, and world proportions. While competition is one aspect of that system, so is cooperation. Often mistakenly called trust, this focus area should instead begin to carve out a formal space and culture for competitive peers to operate cooperatively in the interest of common success.

In my last post, I talked a little about how cyber security is a human problem and can be described in a way that has nothing to do with technology. This post will explore how ignoring this fact will always lead (and, so far, pretty much has led) to a strategic cyber security loss by creating an unacceptable offensive advantage.

Fundamentally, there are five often ignored truths that I’ll use to make my case:

1. Cyber security is a problem that occurs over unbounded time (thanks, Winn Schwartau). In other words, measuring state at any single point doesn't provide a complete picture of what your risk actually is. Just for example: time to detection, time to compromise, how often and when changes occur, etc. are all problems that cannot be described as single points:

Cyber security is actually a *rate* problem overall: how many errors occur per time period, and how many resources does it take to address them?

A strategic win is when the relationship between the error rate and the mitigation rate constantly remains at an acceptable level or better.

2. Complexity is constantly increasing. We're, collectively, always building new systems and adding new features at a frenetic pace. This means that:

As complexity increases, if the error rate stays the same, resources to mitigate must increase unless those resources become more efficient.

3. Resources are limited. At some point, you cannot increase the number of resources and so:

Since resources are limited, either the error rate must be adjusted or the resources made infinitely more efficient (to account for constantly increasing complexity). A toy simulation at the end of this argument works through this arithmetic.

4. Human behavior defines every aspect of security state. Just for example:

DEVELOPERS build TECHNOLOGY 
ENGINEERS build TECHNOLOGY 
ARCHITECTS design TECHNOLOGY 
IT STAFF change TECHNOLOGY 
AUTHORIZED PEOPLE operate TECHNOLOGY
SECURITY STAFF protect TECHNOLOGY
EXECUTIVES/OWNERS require TECHNOLOGY

5. Quoting from a previous post, humans' hopes, dreams, passions, fears, biases, moods, and biochemistries dictate what they do. They're not perfect. They make mistakes. In other words:

Human behavior is what causes the cyber security errors that result in compromise. 

Defensive resource efficiency is also negatively affected by the rate of human behavioral errors.

Therefore, if the rate at which people ("users") make mistakes is not managed, and their activities are not subject to a certain level of long-term quality assurance and control, the increasing complexity of systems ensures that errors will eventually increase beyond the levels available defensive resources can mitigate, even in the face of tactical efficiency improvements.

Adjusting the effectiveness of resources (by automating a patching program or adding malicious code detection, for example) does boost defensive capability from that point on. But because resources will max out, and because effectiveness is ultimately bounded at the top by the error rate (which is a human problem), defensive capability will still eventually flatten out against the vulnerabilities introduced by increasing complexity and an unmanaged error rate, and a strategic loss will occur.

If, however, the error rate is reduced (by adjusting user behavior and turning it into culture), the rate at which vulnerabilities are introduced can be kept in check enough to allow defensive capabilities to be effective – even in the face of increasing complexity. (Assumed: the number of humans to be changed is much more static than the level of complexity.)

Once the strategic distance between vulnerabilities and defensive capabilities is no longer growing with complexity over time, measures such as automated patching programs, malicious code detection, etc. can be used to change the day-to-day relationship between offense and defense, allowing an acceptable level of risk to be achieved as a function of rate, not a moment in time.
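Here is that toy simulation (every number is an assumed, purely illustrative value, not data from anywhere): it contrasts a one-time efficiency boost with an error-rate reduction while complexity grows linearly each period.

```python
# Toy model (all numbers assumed): backlog of unmitigated errors over time.
# A strategic loss is a backlog that never stops growing.

def backlog_after(periods, error_rate, mitigation, boost_at=None, boost=1.5):
    """Unmitigated errors remaining after the given number of periods."""
    backlog, complexity = 0.0, 500.0
    for t in range(periods):
        if boost_at is not None and t == boost_at:
            mitigation *= boost   # one-time efficiency gain (e.g. automated patching)
        complexity += 100.0       # truth 2: complexity always increases
        backlog = max(0.0, backlog + error_rate * complexity - mitigation)
    return backlog

print(backlog_after(30, error_rate=0.05, mitigation=40))              # baseline: strategic loss
print(backlog_after(30, error_rate=0.05, mitigation=40, boost_at=5))  # boost: the same loss, later
print(backlog_after(30, error_rate=0.01, mitigation=40))              # lower error rate: backlog stays at 0
```

Only the third run keeps the backlog flat; the efficiency boost just moves the crossover point further out.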

….The aristocrats.

A year ago, Kelley Bray and I gave a talk at B-Sides Chicago on "A Squishy Model of Cyber Security". Recently, there have been some posts on the SIRA mailing list discussing different perspectives on the importance of users, training, and security versus other controls like patching or malicious code detection, and they made me decide to convert that talk into a blog post.

This is my first of two posts on the topic. The second will be more "me" and will specifically outline the difference between looking at cyber security as a strategic versus a tactical problem, and the implications for the "user" conversation. (Caveat: a lot really depends on how you define "users", so I'm careful about that here.) Enjoy, and thanks to everyone who's contributed to the shape of my brain.

Click the pictures to make them big enough to read.

What is a network?

Let’s pretend it’s newly birthed whole, untouched.

—-

Well, that’s almost a network. We still need users.

Who are Users? They’re everyone who can affect your network.

Don’t agree? All basic attributes are the same…

The only things that change are implementation details: roles, motivations, environment

—-

This is a more complete picture

These users – in their roles – affect equipment.

That causes the computers to be in a given state.

Even with environmental constraints, like hardware, baseline configs, security configs, etc…

Human actions occur before the network is built.

 

—-

To be secure, we can influence the decisions made, or put in place technical controls

Both options affect the same logical chains of actions, just in different places. 

—-

But do we even need to describe technical controls?

Because honestly, computers are just proxies for the will and desire of users.

Which set of users is responsible for a computer's actions just depends on where in time you're looking.

—-

Putting in technical controls requires influencing decisions made by users

Therefore, it's pretty clear that user activities can be used, independent of technology, to describe security.

—-

Specifically, if time is collapsed, authorized user roles and their associated attributes are the network:

Which leads us to some interesting implications:

You can structure activities into common role activities in a way that will help you manage and manipulate the human squishy stuff.

The specific attributes are out of scope here, but take home the idea that, at a non-tech-specific level, they're finite and discrete.

This means humans can be addressed as potential state attributes directly.
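To make that concrete, here's a minimal sketch (the role names, attributes, and numbers are my own illustrative assumptions, not from the talk) of treating authorized user roles, rather than boxes and wires, as the network's security state:

```python
# Minimal sketch (names and numbers assumed): user roles as direct state
# attributes of the network, with errors driven entirely by human behavior.
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str            # e.g. "developer", "it_staff" (assumed labels)
    actions: list[str]   # what this role does to technology
    error_rate: float    # mistakes per action – the squishy, manageable part

@dataclass
class NetworkState:
    roles: list[Role] = field(default_factory=list)

    def expected_errors(self, actions_per_period: int) -> float:
        """Errors introduced per period – a function of roles, not hardware."""
        return sum(r.error_rate * actions_per_period for r in self.roles)

net = NetworkState([
    Role("developer", ["build"], error_rate=0.05),
    Role("it_staff", ["change"], error_rate=0.03),
    Role("operator", ["operate"], error_rate=0.01),
])
print(net.expected_errors(actions_per_period=100))  # 9.0 errors per period
```

Manipulating the human squishy stuff then amounts to changing the error_rate attributes, which is exactly where influencing user behavior enters the model.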

And so no matter what you do, if you do not influence user behavior, you will never be secure.

What are the implications? See next blog post!

(The following was written for the upcoming NESCO Energy Sector Cyber Security Risk Management Town Hall program book.)

I’ve seen people fly, I’ve seen birds fly, I’ve seen a horse fly, I’ve even seen a house fly, but I’ve never seen an organization fly. And, as silly as it might seem, this really does have significant implications for managing cyber risk – especially when we look incredulously at the many public compromises and wonder “why does it keep happening?”.

A good way of approaching that question is to look at where cyber risk management is “succeeding”. Succeeding? Yes! Cyber risk is, in fact, being managed – and quite well! If you doubt this, you might need to ask yourself important questions like “Which risks are being managed?” and, more importantly, “Which risks to *whom*?”

What I mean to say is that, while organizations can have an effect on the world around them, they can't actually be seen or touched. They're not tangible and they can't…"fly". Instead, they are the conceptual sum of the many varied decisions of individual people. These conceptual sums are inanimate; they cannot – and do not – feel risk. Instead, it is their executives, owners, employees, and customers who feel risk. Their soft squishy human hopes, dreams, passions, fears, biases, moods, and biochemistries ultimately drive organizational "risk tolerance" and we should never forget it. Here, it's crucial to understand that people almost exclusively put risks to themselves ahead of all others (including an organization's).

So, then, if the “collective” risks to individuals do trump all else, where do we look for ownership and resolution?

Well, some would say “users”, but do “users” (or “individual performers”) care more about meeting their boss’s expectations or saving the intangible organization from invisible adversaries and hidden costs without direction? Probably the former.

Further, while "the bosses" who set these expectations might see that the cyber problem exists, their primary risks revolve around meeting their own senior leadership's expectations as well.

Ok, but isn’t IT Security key to cyber risk management? Not really. IT Security, like any other group, must align themselves with their senior leaders’ and executives’ priorities. Without that alignment they hold no sway or effect.

So, then, it’s on Executives. Senior leaders, what drives your risk appetites?

I ask because cyber risk management is a hard problem. Aren’t you safest if you follow best practices and “buy Cisco”? Ultimately, if you do and your organization gets compromised, what happens to you? Most likely very little – you did your best after all. Is it even in your best interest, then, to know cyber is a hard problem? If you’re aware that best practices have been failing like communism, aren’t you then obligated to come up with solutions of your own? Wow. No way. It’s best to believe the hype; best to buy Cisco; best to keep transferring the risk.

Intentional ignorance (or lack of “awareness”) isn’t just bliss, it also reduces risk to those people directing organizations and dictating the priorities of their human building blocks.

UPDATE: Please see this link for the most current agenda. The one in the post is outdated: https://sintixerr.files.wordpress.com/2011/10/cyber-program_1020.pdf

So, one of the things I get to do as part of my job, which has been pretty exciting, is put together the agenda for our 2nd annual Cyber Security in Transportation Summit. It's happening November 1 & 2 this year in the DC area and is going to be full of outstanding talks for all ages and backgrounds. ;) The summit is aimed at executives and decision makers from within the transportation industry who might be affected by cyber security or whose actions may affect the security of their organizations. We're covering general cyber security themes as well as transportation-specific ones. If you're in the transportation sector – pipeline, aviation, freight rail, mass transit, highway & motor carrier – and want to attend, let me know at sintixerr@gmail.com.

The tentative agenda currently looks like this:

Summit Schedule (Click for Larger)


AGENDA DESCRIPTIONS

Industry Case Studies

Four discussions of transportation-specific cyber security concerns and perspectives: Incidents, Best Practices that worked, Lessons Learned, Soap Box Scenarios, etc.

Public/Private Partnership

Sector Collaboration

Based on outcomes of this summer’s Transportation Cyber Security Exercise

 

Panel: Maritime

Representatives of the Maritime mode will discuss topics of common interest

 

TBD DHS

 

General Cyber Security Awareness Talks & Panels

Panel: Offensive Perspectives

Non-technical perspectives from well-known offensive researchers

Panel: Threats in the News

Current threats in the news such as APT, Stuxnet, and Anonymous

 

Panel: Executive Perspectives

Concerns and solutions in today’s environments

 

Panel: Risk Management

Cybersecurity impacts on business risk management

 

Verizon Data Breach Incident Report

An empirical overview of current trends

Social Networking

Ups, downs, concerns and impacts of social networking on cyber security

Users and Awareness

Exploration of the most critical aspect of cyber security: Users

Verizon Data Breach Incident Report: Bryan Sartin / Verizon Business
Industry Case Study 1 (Boeing): Mike Garrett / Boeing
Panel: Offensive Perspectives: Kevin Finisterre, Ruben Santamarta, Mark Fabro
Social Media: Patrick Gray / Cisco
Panel: Maritime Stakeholders: USCG & Industry
Panel: Threats in the News: Scot Terban (Anonymous), Liam O Murchu / Symantec (Stuxnet), (APT)
Industry Case Study 2 (Transportation Control Systems): Darryl Song / Volpe
Keynote: Vice Admiral Parker / USCG
DHS
Panel: Executive Perspectives: Amit Yoran / Netwitness, Gus Hunt / CTO of CIA
Sector Collaboration
Users & Awareness: Mike Murray / MAD Security
Panel: Risk Management: Jack Johnson / PwC, Russell Thomas, Jack Whitsitt
