So the results of the Mozilla Delphi project are out. I was one of the panelists – alongside some pretty well-known names like Jane Hall Lute, Bruce Schneier, and a number of other big names. You can find it here:

https://blog.mozilla.org/netpolicy/files/2015/07/Mozilla-Cybersecurity-Delphi-1.0.pdf

And some background here:

https://wiki.mozilla.org/Netpolicy/Cybersecurity_Delphi#Report_Now_Published

“Mozilla’s Cybersecurity Delphi 1.0 is a step to address this gap, by identifying and prioritizing concrete threats and solutions. Through the iterative structure of the Delphi method, we will build expert consensus about the priorities for improving the security of the Internet—infrastructure to protect public safety, sustain economic growth, and foster innovation. The Delphi method offers unique benefits in this context because it aggregates the input of a diverse, broad set of voices, using a discrete and defined process with a clear, fixed end point and a mechanism for non-attribution to encourage open and thorough engagement.”

I’m still processing the results, many of which I adamantly disagree with, but what I think the report mainly shows is that “cybersecurity” isn’t a thing that exists outside of specific sets of contexts, perspectives, and goals. It just goes…poof…and disappears as a concept if it’s not bracketed by material constraints. The all-over-the-board nature of the responses seems to demonstrate that (even though Mozilla did a good job creating a narrative around them).

That said, I think there are some interesting points in the document and that it’s worth a read – at the very least you’ll get to see some of the filter biases of some very smart people (my own obviously included). And those biases are worth knowing, because very often our human fears, backgrounds, and perceptions are not reflective of actual risks and needs.


Today I saw an announcement for another cybersecurity leadership council filled with the usual suspects:

https://www.uschamber.com/press-release/us-chamber-announces-launch-cybersecurity-leadership-council?utm_source=Facebook&utm_medium=Wallpost&utm_campaign=Status

“When it comes to the cybersecurity of our networks, the private sector has the capabilities and the market has produced good solutions. Now we need to focus on mitigation of cyber risks through cross-sector information sharing efforts, public and private partnerships, and the improvement of cyber hygiene of businesses of all sizes,” said Howard Schmidt, a partner at Ridge-Schmidt Cyber, and chairman of the council.

Sigh. Let me give this to you all straight:

First, our cybersecurity exposure is fundamentally created by how businesses go about making money. It’s about corporate discipline, perception, culture, value chains, investment strategies, procurement, marketing, communication, trust, operational quality, etc. Our cybersecurity state is NOT primarily a function of anything that happens in a CISO’s office. It has very little to do with Information Sharing (as typically defined in this conversation), and Public-Private Partnership success depends on having some sort of comprehensive problem space model that precedes conclusions (and the language provided starts with conclusions, without a consensus problem space model anywhere in sight).

A CISO’s activities are a response to a business’s actual exposure (created OUTSIDE of the CISO’s office), the business’s perception of its risk (created by its culture), and actual threat actors (whom neither the business nor the CISO’s office directly controls). Therefore, any conversation or effort centered on how to do “Cybersecurity” better will, almost by definition, fail. “Cybersecurity”, if defined as “activities centered around the CISO’s office and levers to enable the CISO’s office”, has little to no influence or control over the business risk level created by ICT (Internet-Connected Technology) use, because it neither controls nor influences ANY of the primary environmental factors.

The problem is, since coordinating solutions in the non-CISO problem space (exposure creation) requires dealing directly with how businesses make money, it’s a really tough nut to crack (legally, politically, financially, culturally, etc.), and few are willing to do it. It’s MUCH easier to focus on the CISO’s office – even at the expense of success. And, besides, we have a whole security industry telling us that another box or service will solve the problem. (For those not following along, what they mean by “solve the problem” is “hold the line until you slowly drown in the cascading consequences of rising complex conflict interactions online”.)

Further, technically, even if we did move the conversation to “how we do business in general and how that creates exposure” – which NONE of the language around the new group even smells like it might be saying – the way we build IT and OT infrastructure is not securable to the level we desire it to be for the cost we wish to pay. Full stop. This is not a “security” problem; it is a mathematical complexity problem that has to do with error rates and organizational competency across time and disciplines. Moving further on “cybersecurity” without changing the surrounding technical environment – transformationally, not evolutionarily – is an abject waste of time.

Anyone telling you otherwise is selling you something, is ignorant, or is wearing unfortunate perspective blinders.

I wrote the following up in response to a mailing list thread on some sort of anti-OPM petition campaign. I think the original email and a subsequent follow-up from me to a bunch of replies deserve repeating here:

Part 1:

I’m calling shenanigans. Why are we picking on OPM???

We’re seeing numbers like “76% of organizations breached in the past 12 months” or “97% of networks have been breached”, etc. (The numbers are coming from all over – and back up anecdotal evidence – so whichever source you do or don’t believe, it’s still “a whole damn lot”.)

Many of these organizations do have sucky security. Many … do not. Many are, actually, pretty good at it.

What does this mean? It means that, in today’s world, keeping your network clean, over time, is next to impossible. It requires a level of competency and diligence that few organizations have in anything other than their core business competencies. It also means that bemoaning the state of government cybersecurity relative to that of private industry is just…talk. *Everyone* is getting owned, at some point or another.

Publicly flaying OPM does absolutely nothing good and it harms our collective ability to get better in the future.

How/Why?

Because one of the major roadblocks to real improvement is how infrequently organizations willingly admit – publicly or even, often, to themselves – that they’re having a really tough time with security… mainly because exactly this type of villagers-with-torches response occurs when they do.

Being unable to admit difficulty/failure, they’re unable to work publicly together or with other institutions and organizations to collectively figure out a way forward.

I’m sure OPM committed all sorts of infosec sins. I’m sure they acted with classic government idiocy in some respects.

But they would have been compromised anyway by the people who compromised them, in order to get the data they got. Just like everyone else.

If we can stop making things so damn adversarial, maybe we’ll be able to get together and stop…losing…so badly.

Part 2 (Response to a lot of dialogue):

Thanks for all the thoughtful responses so far. FWIW, I suggest taking my points in total, as they were meant to be:

1. L* and A* are right: you can protect “the crown jewels” if you try hard enough. But that’s really not enough to reduce the environmental conflict level, so it really is only an intense holding pattern.

2. While this is possible, everyone is making mistakes anyway – it’s just a matter of the degree of the mistakes. In fact, that’s the deep nature of the problem: it’s too hard not to screw up eventually (even when protecting crown jewels).

3. Some companies make “better” mistakes than others (Kaspersky’s and LastPass’s post-exploitation activities being good examples of “better mistakes”), but it’s a matter of degree of mistake vs. a matter of “not doing something that we know sustainably works with a sufficiently low error rate”.

4. Although the government (or any organization with important data) should, from a “fairness” perspective, be held to a higher level of accountability, from a practical standpoint that’s actually not *helpful* at this stage – which was the central point of my original post. This is because:

5. …even if we hold everyone who needs to be held accountable for making the “best mistakes” possible, it doesn’t get us where we need to be ***and*** it has the side effect of creating an environment which is hostile to admissions of failure.

6. Without candid admission that “we need a whole new re-think of this problem space”, we’re going to keep doing the insane – more of the same and expecting different results. Further investing in infosec as we know it, or limiting protection to crown jewels, simply delays the inevitable.

7. The “inevitable” without change is a level of constant hostility and conflict that will escalate until even protecting the crown jewels will not be sufficient for people to be able to do business economically online (or until the profitability/value curve for the adversaries flattens).

8. So instead of beating up OPM, we should be taking a long hard look at the very long list of crappy companies and excellent companies that have been breached and asking ourselves, “What’s missing?”

9. Because, right now, a list of “InfoSec Best Practices” is a list of activities that aren’t sustainably working.

I got to give about…half…of this presentation this week near Seattle. Obviously it’s missing a lot without the verbal presentation, but I think it’s a good one and maybe folks can get value out of the deck.

Oh wow…talk about a throwback. I just discovered this video of me at the first ReCon in 2005 talking about IDS and Security Data Visualization Theory and Practice. It’s all still completely valid.  Enjoy!!!

https://archive.org/details/recon-2005-visual-analysis

As I may have mentioned…a lot…in several forums…including my previous post here…I’ll be teaching a cybersecurity framework class this year around the United States. It will use the NIST Framework and ES-C2M2 as foils, but it won’t be “training” for them. What it will REALLY be about is using a structured approach to scope out what cybersecurity means from a business perspective and how to apply existing practices and thinking to actually reducing security risk, instead of just building the same old security program over again and hoping for the increasingly unlikely “best”. Anyway, find dates and sign up at the following link, and see the Class Abstract and Outline below. (Also, at the very end, find two of the key custom models I’ll be using): http://www.energysec.org/upcoming-live-events/

Practical Cybersecurity Frameworks Applied to Real World Problems

OVERVIEW

This 2-day class – the first of several throughout the U.S. in 2015 – is intended for those leaders, decision makers, and technologists who feel that they are lacking a usable bridge between the technology and business aspects of cybersecurity and wish to do more than simply build a standard security program and hope for the best.

The class has three parts. Students will begin by exploring the theory behind using structured information to create value, and the theory behind cybersecurity as a business problem and discipline.

With that theory as a foundation, the class will then use two existing frameworks – the new NIST-Facilitated Cybersecurity Framework and the Department of Energy’s Cybersecurity Capability Maturity Model (C2M2) – as foils for discussing how best to build framework bridges between “Security Programs”, “Risk Management”, and “Business Value Management”.

The final day of the class will be used as a facilitated workshop in which students will either solve “conceptualized” real-world problems or, if appropriate, bring their own existing problems to the table to work through.

We hope that students will, at the end, feel they have gained a deeper understanding of cybersecurity and frameworks as they pertain to their own fields than they would have received from more traditional “training” in products, technologies, and frameworks, and that they will be able to apply these new perspectives to enhance the job they do in the real world.

More than anything else, we hope students will find value in spending two days considering cybersecurity in ways they might not have before.

Students should also be aware that, despite some use of jargon, no technical experience or security expertise is assumed and each class will be tailored to the experience levels of those in attendance wherever possible.

CLASS OUTLINE

  1. WELCOME AND INTRODUCTION
    1. Ice Breaking Exercise
  2. FRAMEWORK THEORY: Structuring Information to Enhance Value
    1. Defining Frameworks
    2. Four Framework Design Principles
      1. Label Awareness: Types of words and meanings
      2. Protocol Stacks: Using Layers to Abstract Common Framings
      3. Model/View/Controller: Humans are Systems, Too
      4. Stages of Value: The Means Can Be As Important as the End
  3. SECURITY THEORY: Creating a Consensus Model
    1. Defining Cybersecurity as a Problem: A Parasitic Model
    2. Scoping Cybersecurity as a Discipline: Contrasting Perspectives
      • COMPARISON #1: VULNERABILITY INTRODUCTION VS. EXPLOITATION
      • COMPARISON #2: QUALITY MANAGEMENT VS. RISK RESPONSE
      • COMPARISON #3: HUMANS VS. TECHNOLOGY
      • COMPARISON #4: STRATEGY VS. TACTICS
      • COMPARISON #5: RISKS FROM VS. RISKS TO (CIA)
      • COMPARISON #6: ENABLEMENT VS. PROTECTION
      • COMPARISON #7: DEFENDING VS. IMPROVING
      • COMPARISON #8: ONE-TIME VS. CONSISTENT BEHAVIOR
      • COMPARISON #9: INCIDENT VS. EXPOSURE MANAGEMENT
      • COMPARISON #10: ERROR VS. DEFAULT HANDLING
      • COMPARISON #11: PERCEPTION VS. FACT
      • COMPARISON #12: EMERGENT VS. PREDICTABLE STATE
      • COMPARISON #13: CYBER VS. PHYSICAL SPACE
      • COMPARISON #14: EFFICACY VS. COMPLIANCE
  4. FURTHER STRUCTURAL CONSIDERATIONS: Helpful Linking Concepts
    1. Common Terms & Parenthetical Comparisons
    2. Kill Chains
    3. Metrics Defined
    4. Control Convergence
    5. Development Lifecycles
    6. “Capabilities” Defined
    7. Risk Management
    8. Others
  5. CONNECTING FRAMEWORK THEORY TO SECURITY THEORY
    1. Demonstrate a <Model> containing elements of both the framework and security discussions to be used as a Reasoning Aid throughout the remainder of the class
    2. Adjust the Model
  6. EVALUATING THE NIST FRAMEWORK AND C2M2
    1. Using the domain models discussed earlier, the class will evaluate the structure and content of both the NIST Framework and the C2M2. We will describe use cases, dependencies, how they can be linked together, and how our own class models can be used to fill the shared gaps in both frameworks. The intent of this section is not to critique other work, but to understand the concepts and work needed to build custom integration approaches and frameworks that will help students more effectively utilize existing work to reduce overall risk in their own environments.
  7. DAY-LONG FACILITATED WORKSHOP
    1. We will scope a theoretically-real security problem, use framework design principles, and eventually (hopefully!) arrive at successful risk reduction approaches over the course of the day. This workshop may flex according to student need and desire.

[Images: securityconsiderations2 and hackervaluechain2, the two key custom models referenced above]
