(Needs editing, but I'm a bit stuck…so reviews and comments welcome.)
Years ago – in Mad Magazine – I saw an illustration of Alfred E. Neuman sitting in a tire swing. At first glance everything seemed normal, but after a second of reflection I noticed that it was Alfred himself holding up the tire swing while sitting in it – a situation which obviously does not (heh) fly.
This is the image that came to mind when, after my last post on Data Visualization’s Lessons for Enterprise Security, I was asked questions like:
- What does Operational Risk Management have to do with IDS monitoring directly?
- What about better and more transparent auditing? Wouldn’t that help?
- I don’t see how SABSA really applies to MSSPs or IDS monitoring
My short answer is that no matter how you make your security tire swing or what you do while you’re sitting in it, as a security practice you have to be bolted to something independent that holds it all up. That’s “The Business” in case it’s not clear.
I’d like to address the IDS example in particular, because I think it is very illustrative of the connection between detailed technical realities and high-level business ones. Please keep in mind that this is only a snapshot of the direct implications for a very small section of a much larger, very holistic process. There are many secondary dependencies and repercussions which I do not address here (like tactical technical responses, incorporating lessons learned, strategic business decision making, etc.)
So, first of all, at a process level, IDS monitoring is pretty simple:
Get data -> Evaluate nature of data -> Evaluate implications of activity represented by data -> Respond to and/or continue getting data
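The four steps above can be sketched as a simple loop. This is a minimal illustration, not any particular IDS product's logic; the classification rule (flagging anything containing "attack") is an invented placeholder.

```python
def classify(data):
    # Placeholder classification: flag anything containing "attack".
    return {"data": data, "suspicious": "attack" in data}

def assess_implications(nature):
    # Placeholder impact assessment: suspicious data is the impact itself.
    return nature["data"] if nature["suspicious"] else None

def monitor(source):
    alerts = []
    for data in source:                       # 1. Get data
        nature = classify(data)               # 2. Evaluate nature of data
        impact = assess_implications(nature)  # 3. Evaluate implications
        if impact is not None:
            alerts.append(impact)             # 4. Respond and/or continue
    return alerts
```

The real work, of course, hides inside the evaluation steps – which is where the layering below comes in.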
No matter what environment you’re in, if you’re looking at IDS data (or doing any other monitoring, really), you do these four things. If you look a little closer, though, you begin to see that they are (or should be) repeated iteratively. This is because there are really multiple levels – or layers – at which security data can be evaluated (which, incidentally, looks a lot like any other protocol stack). Let’s say, conceptually, that there are five of them:
- Universal Technical Standards: This layer would consist of measuring activity against RFCs, Protocol Standards, etc. Things that -should- work the same everywhere.
- Environmental Configuration: Here, traffic is evaluated against local configurations that might change from network to network. This includes the configuration of OS’s, Web Servers, Infrastructure Devices, etc.
- Data and Information Control: What happens to data riding on your network and IT obviously falls in the area of concern for IDS analysis.
- Timing and Behavioral Thresholds: Are things happening more frequently than normal? Less frequently? Uptime? User logins? Memory Usage? etc.
- Business Rules: Is the IT actually doing something that directly affects the business? Are manufacturing robots shutting down? Are internal company secrets being sent to competitors? Are you spamming the military?
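Since these layers are ordered like a protocol stack, one way to picture them is as an ordered enumeration – a sketch only, with names I've chosen for illustration:

```python
from enum import IntEnum

class Layer(IntEnum):
    TECHNICAL_STANDARDS = 1  # RFCs, protocol standards
    ENVIRONMENT_CONFIG  = 2  # local OS / server / device configuration
    DATA_CONTROL        = 3  # what happens to data on the network
    BEHAVIOR_THRESHOLDS = 4  # timing, frequency, uptime, usage
    BUSINESS_RULES      = 5  # direct effects on the business
```

Like a protocol stack, analysis rolls upward: the business-rules layer sits on top.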
So what is the intersection between these layers and enterprise business architecture or operational risk management? It looks, initially, like the only direct overlap is in layer 5, right? Not true.
First, each of these layers requires some level of the business context provided by business security architectures to even be effectively evaluated.
- To evaluate security data against potential technical standards, analysts need to know what technologies are in place and deployed and in what manner. Exceptions and outliers are especially important.
- From an environmental perspective, analysts would be well served by knowing the security policies that the configurations and environment are supporting. E.g., what actions the configurations are trying to prevent or support (in terms of the other layers)
- The need to know what data belongs to what data policies and what those policies say is also fundamental. Data policies are tied to conceptual business architecture, which is tied to contextual business assets and requirements.
- System behavior is evaluated in part by knowing things like business schedules and processes. Is payroll being run every 4th Thursday? Are people going to be logging in from all over the world, or just certain locations? Should lab systems pull data from production systems?
- Knowing what business functions are important to keep running, to what thresholds, and how IT systems support those is crucial when trying to understand the big picture and put “events” in terms of “incidents”. Additionally, keep in mind that things like “reputation” and “customer satisfaction” are also business assets that organizations need to protect.
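To make the dependence on business context concrete, here is a hypothetical sketch of two of the checks above. Every value in the context dictionary is invented; the point is that without it, the functions have no basis for a judgment at all.

```python
# Invented business context an analyst would need from the architecture.
BUSINESS_CONTEXT = {
    "payroll_day": "4th Thursday",            # when payroll should run
    "expected_login_countries": {"US", "UK"}, # where users log in from
    "lab_may_pull_from_prod": False,          # data-flow policy
}

def login_is_expected(country):
    # Behavioral check: is this login location normal for the business?
    return country in BUSINESS_CONTEXT["expected_login_countries"]

def flow_is_allowed(src, dst):
    # Data-policy check: should lab systems pull data from production?
    if src == "prod" and dst == "lab":
        return BUSINESS_CONTEXT["lab_may_pull_from_prod"]
    return True
```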
Secondly, and maybe more importantly, if you actually look at the process flow (below) you find that the analysis process always rolls up to an evaluation at layer 5 (the business rules) of the analysis stack.
From a process flow perspective, there are no analysis scenarios that terminate before completing a layer 5 business analysis (at the bottom of the image).
How does this work?
Analysis begins at one of these five layers – which one is first doesn’t really matter (they are often, in fact, done in parallel). Data is received and is evaluated against the criteria at the layer in question. If there are no exceptions, the same raw data is evaluated against the next layer in the chain. If an exception is found at any one of these layers, the impacts of that exception are then evaluated at all layers. So, for instance, if an analyst notices that there are “funny packets” that aren’t normal TCP/IP traffic while evaluating against “Technical Standards”, he or she then looks to see what the potential technical, environmental, data, behavioral, and business implications are of that traffic. For each of those, the analyst follows the process as if he’d just received new raw data.
This continues to happen until the original data has been run up the entire stack and a final business impact has been determined. Sometimes the path there is short because the answers are known or obvious, or because complete data is unavailable to make a determination; other times the entire process is followed at a very detailed level. Regardless, the logical process holds true in all cases, and there is either a potential business impact or there isn't.
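The roll-up logic can be sketched as follows. This is a simplification of the process described above – the layer checks are stubs supplied by the caller, and the names are my own – but it preserves the key property: the final determination is always made at the business layer.

```python
LAYERS = ["technical", "environment", "data", "behavior", "business"]

def analyze(data, checks):
    # checks: layer name -> function(data) returning a finding or None.
    findings = {}
    for layer in LAYERS:
        result = checks[layer](data)
        if result is not None:
            findings[layer] = result
    # Regardless of where exceptions were found, the question the process
    # terminates on is: is there a potential business impact, or isn't there?
    return findings, "business" in findings
```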
Read that again: There is either a potential business impact or there isn’t. Without context, IDS monitoring can never be a security function.
The value of IDS monitoring never gets realized if exception events are not tied to business operating requirements and risk appetite (which only business stakeholders can determine). If that linkage is not formally made or that appetite not assessed, IDS monitoring fails. None of the five analysis layers are inherently worth evaluating if a business context for them does not exist and most can’t even be evaluated at all without that context.
What provides this context? Business Security Architecture and Risk Management.
What’s interesting, though, is that these things don’t work when isolated to security, as the original blog post (and others) pointed out. If you limit the scope of your activities to “security”, you end up with the tire swing with no tree. You have to account for and model your entire business formally to achieve security. What this says is that business security architectures are, at a very real level, just business architectures. There is no material difference between the two.
But why would you need a full fledged business-wide process to get this information to you (or your analysts)? Because it’s really hard and expensive to do without the practice and culture in place enterprise-wide. You might brute force it and get your answers once without it, but trying to keep that information up to date would be completely futile.
In closing, I’d like to reiterate that I’ve only discussed business security architecture and operational risk management’s impacts on technical security operations (looking up). Of at least as much importance is their role in aiding executive or management decision makers in correctly assessing and responding to risk. This is accomplished by providing a very clear line of sight from the trenches to business assets and risk appetite (looking down).
This is an excerpt from an email I sent to a number of colleagues as we hash out what our various and competing mandates to “secure cyberspace” (in our own domains) actually involve and what we have to do about them. My position is that, first and foremost, we need a business security architecture to answer those questions, and that’s where this following conversation is leading. Various recent discussions I’ve been a part of outside of the job – just in the industry – also were on my mind when I wrote this. There seems to be this belief that “cyber security” is somehow a technical problem, and I couldn’t disagree more. (Note: I’m speaking generally below and am leaving out detail for the sake of brevity – I don’t have time to write a dissertation. :) Note 2: The fact that I don’t discuss our complete and utter failure to universally define “identity” doesn’t mean I don’t believe it significantly impacts our ability to secure “stuff”.)
Security really comes down to the identity of systems and people: who needs to interact with whom, what transactions they can perform between them (and in what direction), how long those transactions may last, how often they may occur, and which side of the transaction “owns” the security of the transaction. Everything you do to secure your domain of control flows from this information. It allows us (literally) to place and configure the technical and process security controls which you have described as being a sector need.
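The pieces of information listed above can be written down as a data structure – a sketch with field names I've invented, not any standard's schema – and a control placed in front of a transaction is, at bottom, just a check against rules like this:

```python
from dataclasses import dataclass

@dataclass
class TransactionRule:
    initiator: str         # who may start the interaction
    responder: str         # who they interact with
    operations: frozenset  # which transactions may be performed
    direction: str         # "push", "pull", or "both"
    max_duration_s: int    # how long a transaction may last
    max_per_hour: int      # how often it may occur
    security_owner: str    # which side "owns" the transaction's security

def is_allowed(rule, initiator, responder, op):
    # A control's configuration flows directly from the rule's fields.
    return (initiator == rule.initiator
            and responder == rule.responder
            and op in rule.operations)
```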
If you wanted to describe a “secure” cyber system without this information, your answer necessarily would have to be: “the computer will be in an electromagnetically shielded room with hardware memory that can’t be written to, no disk drives, no connection outside the room, and anyone who uses it must be x-rayed and without clothing to hide plastic or wooden tools.” That’s obviously silly, but we just said secure – we didn’t define exceptions to “secure”. So how is the middle ground defined? What does “secure” mean to you? I’d suggest that it means that whatever controls are in place enable you to continue to operate in a way that supports your mission. In other words, since security is first defined as “the system should not do anything more or less than I want it to”, you must define what you want it to do. Then, you go through a prioritization effort of what you want it to do so that your budget can support it.
One of the reasons that developing this business context of “what should my systems do” is important to the budget piece is that this information allows us to begin to create attack trees. Attack trees help us understand the weak points in the system *that we care about* (not just every possible attack). Few of those weak points are immediately apparent or obvious after an informal inspection, so a process is needed. From attack trees and control placement, we can then prioritize our efforts based on a combination of technical vulnerability, business risk, and available money, and work out a budget. Otherwise you’re just making up numbers and -hoping- your investment pays off.
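For readers unfamiliar with attack trees: the root is an attacker goal, OR-nodes are alternative ways to achieve it, and AND-nodes require every child step. A minimal sketch, with a wholly invented tree and attacker costs, shows why the structure matters for prioritization – the cheapest path is the one to defend first:

```python
def cheapest_attack(node):
    # Nodes are tuples: ("leaf", cost) | ("or", children) | ("and", children).
    kind = node[0]
    if kind == "leaf":
        return node[1]  # attacker's cost for this step
    costs = [cheapest_attack(child) for child in node[1]]
    # An OR-node takes the cheapest alternative; an AND-node needs all steps.
    return min(costs) if kind == "or" else sum(costs)

# Hypothetical goal: shut down a manufacturing robot.
robot_shutdown = ("or", [
    ("leaf", 10),                         # phish an operator
    ("and", [("leaf", 4), ("leaf", 3)]),  # breach VPN AND pivot to controller
])
```

Here the two-step VPN path is cheaper than the phish, so that is where the next control dollar goes – a determination you can only make once the business has told you which goals (like the robots) you actually care about.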
And, as far as using universal standards in this area, your requirements MAY look like someone else’s, but the exceptions to that become maliciously exploitable, so you still need to validate and manage your environment and business requirements.
Without this information defined, your security controls will have significant gaps in placement, they will not be effectively auditable for malicious activity, their configurations will not accurately reflect the real security needs of the system, and you will most likely run out of a security budget before you’ve mitigated a significant amount of risk. On the macro level, these really are the areas which hackers and malicious actors exploit with consistently rich returns.
Technical vulnerabilities will always exist, and we do need to maintain awareness of them and processes to try and keep up with them, but these are largely well understood and we still fail (and we fail dramatically) to actually prevent intrusions, data exfiltration, denial of service, etc. – even when we have controls in place. It’s not for lack of technology, really. We have controls out the wazoo – firewalls, antivirus, IDS, etc. We’re just not using them rationally.
This business context (or at least the education and tools to develop it) – if you were to look at every company and stakeholder in the sector as being part of the same business (the business of ____) – is what I hope we in the working group can begin to help provide. I really believe that what might seem like fuzzy talk without a lot of action will ultimately result in a concrete way forward to reduce or mitigate the risk we all face.
(Second Update: As of 9/14/2009, I’m working for Idaho National Laboratory (INL) liaisoning to DHS in DC supporting their ICS-CERT effort. This is reflected in the online resume, but not yet the pdf.)
Just a pinging post since I’ve just (finally) updated my resume on this site and elsewhere to reflect what I’m currently doing at TSA. Apparently, IDS analysts in this area are in hot demand, but that’s not really what I do any more. Unfortunately, what I -do- do isn’t as easy to tokenize/categorize as something like that. I do love it, though :) I like…making stuff work better than it did before and doing new things. People, in particular.
Here’s a link to the PDF: