
So, I’ve recently written up two separate pieces talking about Business Security, Frameworks, and Cybersecurity.  One is for <UNDISCLOSED>; the other is for CForum (hey, I was the next highlighted blog post after Ron Gula’s!).  Let me know what you think of them; they’re below.  (Also, two posts more directly about the new NIST Framework and the DHS Voluntary Program are HERE and HERE.)

SCOPING FRAMEWORK USE

Cybersecurity is, broadly, the enablement of an environment in which business objectives are sustainably achievable in the face of continuous risk resulting from the use of cyber systems.

The risks from using cyber systems usually take the form of actors desiring to use those systems as a means of repurposing business value chains to alter the value produced, inhibit the value produced, or produce new value in support of actor objectives.

Managing these risks involves two focus areas:

  1. Creating a business environment which limits the window of opportunity provided to actors in which to achieve their objectives
  2. Executing a security-specific program that is able to identify, mitigate, and respond to actor activity which occurs within the remaining window of opportunity.

Leaving the business environment unmanaged provides a large, continuous window of opportunity which is, at best, not cost effective for security-specific programs to respond to.  At worst, an unmanaged business environment leaves the window of opportunity too wide for security-specific programs to protect, even with excessive financial investment.

On the other hand, no organization can manage its business environment and reduce actor opportunities sufficiently to remove the need for security-specific programs.

Both focus areas must be addressed for sustainable, effective, cost-limited cybersecurity, and the NIST Framework can help with both.

Cybersecurity frameworks, generically, may provide business value to an organization in three ways:

  1. Scope and Completeness Assessment
  2. Coverage Validation
  3. Efficacy Testing

In the case of security-specific programs, the NIST Framework can be used directly as a positive model for determining program scope and completeness.  It can (using the tier model) be augmented with additional information to assist with security program coverage validation, and it can play a role within a larger model in testing the efficacy of security program efforts.

Within the business environment focus area, the NIST Framework can also play a supporting role as a negative model to help determine areas which must be better controlled by the business before security-specific programs can effectively manage residual cybersecurity risk flowing down from that environment.

While XYZ is focusing specifically on the former, security-program-specific focus area, its resulting efforts can, with forethought, go a long way toward providing a foundation for the less-well-explored area of cybersecurity risk reduction through business environment management.  That, in turn, can lead us toward the kind of comprehensive cybersecurity risk management approaches that will, over time, reduce our risk across organizations, the sector, and the nation sustainably, cost effectively, and independently of increases in complexity and changes in actor behavior.

FRAMING THE FUTURE (From CForum)

What’s next? 

This is a question on all of our minds – not just for the Framework but also cybersecurity more generally.

Executives have started to get on board, the press is paying attention, manufacturers are starting to include security in their ICS products, grassroots organizations such as I Am The Cavalry and others are forming to help move Automotive and Medical Device security forward, the White House has issued the Executive Order, Congressional staff discuss cybersecurity regularly, and together we have created a common practice consensus “flag” with the NIST Framework – and this very forum now exists to help us collaborate more effectively.

So, how do we use this momentum to continue to move forward coherently toward sustained risk reduction?

I’ve heard a lot of good ideas here, at the 6th NIST workshop, and in many other venues about what to do next, but a lot of these ideas, thrown up into the air, fall down with no structure to catch them. There is no bigger picture into which to slot next step ideas and see how they relate to past work, need, and each other.

Without such a common reference structure, making progress from here on out will be increasingly difficult, and I believe we need to learn from the very recently successful past and build a framework to do so.

The new framework I’m envisioning would, far from a “2.0” of what we’ve already built, have a completely different goal. Instead of collecting and organizing common solution elements into a document, this framework would identify the types of problems we face doing business in a hostile, ICT (Information and Communication Technology) enabled world and provide a context in which to organize the existing NIST Framework solutions.

In other words, if we identify a common language and reference for the “cybersecurity problem space” – especially the areas outside of the CISO organization – it should be much easier to go back, find out where the Framework excels, where it needs help, and where it simply does not apply and, from there, allow us to organize future efforts effectively and sustainably.

Maybe we should have done this earlier, but maybe it took creating a Common Practice Framework to highlight the need to go back and create a “Problem Space Framework”. How many of us have looked at strategy documents that said things like “Will reduce cyber attacks” or “Improve Cybersecurity” and thought “But wait, what does that mean?” Shouldn’t there be goals, or non-security objectives for security to help frame, limit, and shape our efforts to some productive end?

When the executive order came out and I heard about how the NIST Framework was going to be used to support “Performance Objectives”, I thought, “Great! Finally, we’re going to have the electrical current that non-security-activity goals provide to security activities to drive them to defined, implementable, and effective ends”.

Unfortunately, that doesn’t seem to be happening and there doesn’t seem to be consensus that that was even the original intent. But that doesn’t mean we don’t still need to create that organizing current around security activities.

The “Tier” concept in the existing framework, as incomplete as it is, definitely speaks to the need for the application of a maturity model to what we’re doing, but even maturity models need to exist inside a larger context of “Why?” that is framed by all of the ways organizations – and those who work for them – introduce risk. If we don’t have a framework for risk introduction in a broad business and national context, how will we ever be able to tell ourselves, each other, our customers, or anyone else that we’ve applied the NIST Framework in some legitimately effective or helpful way?

This shouldn’t be a hard problem to solve. As with the Common Practices in the NIST Framework, we’re in a situation where a lot of different people have very different but valid views into the cybersecurity problem space. The material and knowledge exists, we just need to gather it, write it down, gain consensus, and begin to apply it.

From my own point of view, I think this begins by identifying (and documenting) how the major, common roles within organizations (and of organizations) introduce cybersecurity risk through legitimate, authorized means in the course of doing business. If we can nail this down across the entire business value chain – from Boards and CEOs to CFOs to Operations Managers to IT to Procurement to Sales and Marketing to HR to Industry Partners to Insurance Companies to Regulators all the way to the CISO shops that the NIST Framework already assumes solutions for – we will have a much better understanding of what we’re solving for.

This is because our cybersecurity risk profiles are, when it comes down to real root causes, exclusively the result of the series of decisions made by people in legitimate, authorized capacities. Whether or not those decisions are in your sphere of influence, knowing how they influence your cybersecurity risk profile over time is the first step in determining how to most effectively apply the controls from the existing NIST Framework. From there, that knowledge can be applied to contextualizing the maturity levels in models like the ES-C2M2 in a way that provides “Management Metrics” to those responsible for managing organizational behavior, and those maturity levels can then guide the scope, goals, metrics, and placement of the controls that exist in the NIST Framework.

Beyond the tactical benefits of the knowledge such a framework would give us, our ability to act strategically will improve. If we know how our CEOs and those who work for them are introducing risk, if we can find commonalities across organizations, then we can describe the goals, effectiveness, and mitigating controls in terms that are much less dependent on far too rapidly changing technology and external threat actors. This would provide a much more stable platform over time from which to begin doing sustainably successful risk management, maturity modeling, and NIST Framework implementation and adoption.

That said, this is just one way we might go about creating a “Problem Space Framework” – there are others. Regardless of which one we choose, I strongly believe building one will clarify, speed up, and make our way forward much more effective at reducing risks created by the use and operation of ICTs.

(Needs editing, but I’m a bit stuck… so reviews and comments welcome.)

Years ago – in Mad Magazine – I saw an illustration of Alfred E. Neuman sitting in a tire swing. At first glance everything seemed normal, but after a second of reflection I noticed that it was Alfred himself holding up the tire swing while sitting in it – a situation which obviously does not (heh) fly.

This is the image that came to mind when, after my last post on Data Visualization’s Lessons for Enterprise Security, I was asked questions like:

  • What does Operational Risk Management have to do with IDS monitoring directly?
  • What about better and more transparent auditing? Wouldn’t that help?
  • I don’t see how SABSA really applies to MSSPs or IDS monitoring

My short answer is that no matter how you make your security tire swing or what you do while you’re sitting in it, as a security practice you have to be bolted to something independent that holds it all up. That’s “The Business” in case it’s not clear.

I’d like to address the IDS example, in particular, because I think it is very illustrative of the connection between detailed technical and high level business realities.  Please keep in mind that this is only a snapshot of the direct implications to a very small section of a much larger, very holistic process.  There are many secondary dependencies and repercussions which I do not address here (like tactical technical responses, incorporating lessons learned, strategic business decision making, etc.)

So, first of all, at a process level, IDS monitoring is pretty simple:

Get data -> Evaluate nature of data -> Evaluate implications of activity represented by data -> Respond to and/or continue getting data

No matter what environment you’re in, if you’re looking at IDS data (or doing any other monitoring, really), you do these four things. If you look a little closer, though, you begin to see that they are (or should be) repeated iteratively. This is because there are really multiple levels – or layers – at which security data can be evaluated (which, incidentally, looks a lot like any other protocol stack). Let’s say, conceptually, that there are five of them:

  1. Universal Technical Standards: This layer would consist of measuring activity against RFCs, protocol standards, etc. Things that -should- work the same everywhere.
  2. Environmental Configuration: Here, traffic is evaluated against local configurations that might change from network to network. This includes the configuration of OSes, web servers, infrastructure devices, etc.
  3. Data and Information Control: What happens to data riding on your network and IT systems obviously falls within the area of concern for IDS analysis.
  4. Timing and Behavioral Thresholds: Are things happening more frequently than normal? Less frequently? Uptime? User logins? Memory Usage? etc.
  5. Business Rules: Is the IT actually doing something that directly affects the business? Are manufacturing robots shutting down? Are internal company secrets being sent to competitors? Are you spamming the military?
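The five layers above can be sketched as an ordered stack of checks run against the same raw event. This is only an illustration: the event dictionary, the `evaluate` helper, and the two sample predicates are my own hypothetical stand-ins for real detection logic.

```python
# The five evaluation layers from the post, run in order against one event.
LAYERS = [
    "Universal Technical Standards",
    "Environmental Configuration",
    "Data and Information Control",
    "Timing and Behavioral Thresholds",
    "Business Rules",
]

def evaluate(event, checks):
    """Run the same raw event against every layer, collecting exceptions."""
    exceptions = []
    for layer in LAYERS:
        check = checks.get(layer)
        # A check returns True when the event looks normal at that layer.
        if check is not None and not check(event):
            exceptions.append(layer)
    return exceptions

# Hypothetical checks for two of the layers; real ones would wrap IDS logic.
checks = {
    "Universal Technical Standards": lambda e: e.get("valid_tcp", True),
    "Business Rules": lambda e: not e.get("touches_trade_secrets", False),
}

evaluate({"valid_tcp": False, "touches_trade_secrets": True}, checks)
# → ["Universal Technical Standards", "Business Rules"]
```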

So what is the intersection between these layers and enterprise business architecture or operational risk management? It looks, initially, like the only direct overlap is in layer 5, right? Not true.

First, each of these layers requires some level of the business context provided by business security architectures to even be effectively evaluated.

For example:

  • To evaluate security data against potential technical standards, analysts need to know what technologies are deployed and in what manner. Exceptions and outliers are especially important.
  • From an environmental perspective, analysts would be well served by knowing the security policies that the configurations and environment are supporting. E.g., what actions the configurations are trying to prevent or support (in terms of the other layers).
  • The need to know what data belongs to what data policies and what those policies say is also fundamental.  Data policies are tied to conceptual business architecture, which is tied to contextual business assets and requirements.
  • System behavior is evaluated in part by knowing things like business schedules and processes. Is payroll being run every 4th Thursday? Are people going to be logging in from all over the world, or just certain locations? Should lab systems pull data from production systems?
  • Knowing what business functions are important to keep running, to what thresholds, and how IT systems support those is crucial when trying to understand the big picture and put “events” in terms of “incidents”.  Additionally, it should be kept in mind that things like “reputation” and “customer satisfaction” are also business assets that organizations need to protect.

Secondly, and maybe more importantly, if you actually look at the process flow (below) you find that the analysis process always rolls up to an evaluation at layer 5 (the business rules) of the analysis stack.

From a process flow perspective, there are no analysis scenarios that terminate before completing a layer 5 business analysis (at the bottom of the image).


[Figure: IDS analysis process flow across the five layers, with every path rolling up to a layer 5 business evaluation]

How does this work?

Analysis begins at one of these five layers – which one comes first doesn’t really matter (they are often, in fact, done in parallel). Data is received and evaluated against the criteria at the layer in question. If there are no exceptions, the same raw data is evaluated against the next layer in the chain. If an exception is found at any one of these layers, the impacts of that exception are then evaluated at all layers. So, for instance, if an analyst notices “funny packets” that aren’t normal TCP/IP traffic while evaluating against “Technical Standards”, he or she then looks to see what the potential technical, environmental, data, behavioral, and business implications of that traffic are. For each of those, the analyst follows the process as if they had just received new raw data.

This continues to happen until the original data has been run up the entire stack and a final business impact has been determined. Sometimes the path there is short because the answers are known or obvious, or complete data is unavailable to make a determination; sometimes the entire process is followed at a very detailed level. Regardless, the logical process holds true in all cases: there is either a potential business impact or there isn’t.

Read that again: There is either a potential business impact or there isn’t. Without context, IDS monitoring can never be a security function.
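The escalation just described can be sketched as a small worklist algorithm: every exception found at any layer is treated as new raw data and re-evaluated at every layer, until each thread of analysis terminates in a layer 5 (business) determination. This is only an illustration; the layer names, the `evaluate_at_layer` contract (return `None` for “no exception here”), and the sample findings are my own assumptions, not anything from a real IDS.

```python
from collections import deque

LAYER_NAMES = ["technical", "environmental", "data", "behavioral", "business"]

def analyze(raw_event, evaluate_at_layer, layers):
    """Run raw_event up the stack; return the business impacts determined."""
    queue = deque([raw_event])
    business_impacts = []
    while queue:
        item = queue.popleft()
        for layer in layers:
            finding = evaluate_at_layer(item, layer)  # None means no exception
            if finding is None:
                continue
            if layer == layers[-1]:
                business_impacts.append(finding)  # terminal: business impact found
            else:
                queue.append(finding)  # re-run the finding as new raw data
    return business_impacts

# Hypothetical evaluation: "funny packets" raise a technical exception whose
# implications only register once examined at the business layer.
def eval_fn(item, layer):
    if item == "funny packets" and layer == "technical":
        return "possible reconnaissance scan"
    if item == "possible reconnaissance scan" and layer == "business":
        return "risk to manufacturing uptime"
    return None

analyze("funny packets", eval_fn, LAYER_NAMES)
# → ["risk to manufacturing uptime"]
```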

The value of IDS monitoring never gets realized if exception events are not tied to business operating requirements and risk appetite (which only business stakeholders can determine). If that linkage is not formally made or that appetite not assessed, IDS monitoring fails. None of the five analysis layers are inherently worth evaluating if a business context for them does not exist and most can’t even be evaluated at all without that context.

What provides this context? Business Security Architecture and Risk Management.

What’s interesting, though, is that these things don’t work when isolated to security, as the original blog post (and others) pointed out. If you limit the scope of your activities to “security”, you end up with the tire swing with no tree. You have to account for and model your entire business formally to achieve security. What this says is that business security architectures are, at a very real level, just business architectures. There is no material difference between the two.

But why would you need a full-fledged, business-wide process to get this information to you (or your analysts)? Because it’s really hard and expensive to do without the practice and culture in place enterprise-wide. You might brute-force it and get your answers once without it, but trying to keep that information up to date would be completely futile.


In closing, I’d like to reiterate that I’ve only discussed business security architecture and operational risk management’s impacts on technical security operations (looking up). Of at least as much importance is their role in aiding executive and management decision makers in correctly assessing and responding to risk. This is accomplished by providing a very clear line of sight from the trenches to business assets and risk appetite (looking down).

This is an excerpt from an email I sent to a number of colleagues as we hash out what our various and competing mandates to “secure cyberspace” (in our own domains) actually involve and what we have to do about them.  My position is that, first and foremost, we need a business security architecture to answer those questions, and that’s where this following conversation is leading.  Various recent discussions I’ve been a part of outside of the job – just in the industry – were also on my mind when I wrote this.  There seems to be this belief that “cyber security” is somehow a technical problem, and I couldn’t disagree more. (Note: I’m speaking generally below and am leaving out detail for the sake of brevity – I don’t have time to write a dissertation. :) Note 2: The fact that I don’t discuss our complete and utter failure to universally define “identity” doesn’t mean I don’t believe it significantly impacts our ability to secure “stuff”.)

Security really comes down to the identity of systems and people: who needs to interact with whom, what transactions they can perform between them (and in what direction), how long those transactions may last, how often they may occur, and which side of the transaction “owns” the security of the transaction.  Everything you do to secure your domain of control flows from this information.  It allows us, literally, to place and configure the technical and process security controls which you have described as being a sector need.
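That identity-and-transaction information can be written down as a simple policy record. A minimal sketch follows; the field names, identities, and the `allows()` helper are my own illustration, not any standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransactionPolicy:
    """One record of 'who may do what with whom'; fields are illustrative."""
    initiator: str        # identity of the system or person starting the transaction
    responder: str        # identity on the other end
    operations: tuple     # transactions permitted between them
    direction: str        # "initiator->responder", "responder->initiator", or "both"
    max_duration_s: int   # how long a transaction may last
    max_per_hour: int     # how often it may occur
    security_owner: str   # which side "owns" the security of the transaction

def allows(policy, initiator, responder, operation):
    """Check whether a concrete interaction falls inside the defined policy."""
    return (policy.initiator == initiator
            and policy.responder == responder
            and operation in policy.operations)

# Hypothetical example: an HR application reading from a payroll database.
payroll = TransactionPolicy("hr-app", "payroll-db", ("read", "write"),
                            "initiator->responder", 300, 12, "hr-app")
```

Records like these are what let you place and configure controls deliberately: anything outside the defined set is, by definition, an exception worth examining.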

If you wanted to describe a “secure” cyber system without this information, your answer necessarily would have to be: “the computer will be in an electromagnetically shielded room with hardware memory that can’t be written to, no disk drives, no connection outside the room, and anyone who uses it must be x-rayed and without clothing that could hide plastic or wooden tools.”  That’s obviously silly, but we just said secure – we didn’t define exceptions to “secure”.  So how is the middle ground defined?  What does “secure” mean to you? I’d suggest that it means that whatever controls are in place enable you to continue to operate in a way that supports your mission.  In other words, since security is first defined as “the system should not do anything more or less than I want it to”, you must define what you want it to do. Then you go through a prioritization effort of what you want it to do so that your budget can support it.

One of the reasons that developing this business context of “what should my systems do” is important to the budget piece is that this information allows us to begin to create attack trees. Attack trees help us understand the weak points in the system *that we care about* (not just every and all possible attacks). Few of those weak points are immediately apparent or obvious after an informal inspection, so a process is needed.  From attack trees and control placement, we can then prioritize our efforts based on a combination of technical vulnerability, business risk, and available money, and work out a budget.  Otherwise you’re just making up numbers and -hoping- your investment pays off.
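A toy attack tree makes the prioritization idea concrete. In this sketch each leaf carries a hypothetical attacker “cost”; OR nodes take the cheapest child (the attacker picks one path), and AND nodes sum their children (the attacker must do all of them). The tree, node shapes, and costs below are purely illustrative.

```python
def attacker_cost(node):
    """Return the cheapest total cost for an attacker to achieve this node."""
    kind = node.get("type", "leaf")
    if kind == "leaf":
        return node["cost"]
    child_costs = [attacker_cost(child) for child in node["children"]]
    # OR: attacker chooses the cheapest path; AND: attacker must do them all.
    return min(child_costs) if kind == "or" else sum(child_costs)

steal_data = {
    "type": "or",
    "children": [
        {"cost": 8},                      # e.g. phish an administrator
        {"type": "and", "children": [     # exploit a server AND evade the IDS
            {"cost": 3},
            {"cost": 2},
        ]},
    ],
}

attacker_cost(steal_data)
# → 5: the exploit-and-evade path is the weak point to prioritize
```

The lowest-cost path is the weak point *you care about*; combining it with business risk and available money is what turns the tree into a defensible budget.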

And, as far as using universal standards in this area, your requirements MAY look like someone else’s, but the exceptions to that become maliciously exploitable, so you still need to validate and manage your environment and business requirements.

Without this information defined, your security controls will have significant gaps in placement, they will not be effectively auditable for malicious activity, their configurations will not accurately reflect the real security needs of the system, and you will most likely run out of security budget before you’ve mitigated a significant amount of risk. On the macro level, these really are the areas which hackers and malicious actors exploit with consistently rich returns.

Technical vulnerabilities will always exist, and we do need to maintain awareness of them and processes to try and keep up with them, but these are largely well understood and we still fail (and fail dramatically) to actually prevent intrusions, data exfiltration, denial of service, etc. – even when we have controls in place.  It’s not for lack of technology, really. We have controls out the wazoo – firewalls, antivirus, IDS, etc.  We’re just not using them rationally.

This business context (or at least the education and tools to develop it) – if you were to look at every company and stakeholder in the sector as being part of the same business (the business of ____) – is what I hope we in the working group can begin to help provide. I really believe that what might seem like fuzzy talk without a lot of action will ultimately result in a concrete way forward to reduce or mitigate the risk we all face.

(Second Update: As of 9/14/2009, I’m working for Idaho National Laboratory (INL) as a liaison to DHS in DC, supporting their ICS-CERT effort. This is reflected in the online resume, but not yet the PDF.)

Just a pinging post, since I’ve just (finally) updated my resume on this site and elsewhere to reflect what I’m currently doing at TSA.  Apparently, IDS analysts in this area are in hot demand, but that’s not really what I do any more.  Unfortunately, what I -do- do isn’t as easy to tokenize/categorize as something like that. I do love it, though :) I like… making stuff work better than it did before and doing new things.  People, in particular.

Here’s a link to the PDF:

http://jackwhitsitt.com/whitsittresume02092009b.pdf

And online:

http://sintixerr.wordpress.com/jack-whitsitts-technical-and-security-resume/

About Me

Jack Whitsitt

National Cyber Security. Risk. Multi-Dimensional Rainbows. Maker of conceptual lenses. Artist. Facilitator. Educator. Past/Future Vagabond. Drinks Unicorn Tears.
