I normally don’t have much to say here about my day job (partly why you’ve seen more of a focus on art), but I thought (since I’d been previously linking to the DHS Control Systems Security Program pages) that it was worth mentioning that ICS-CERT has its own website these days: http://ics-cert.org
Take a look at it if you’re in the control systems / SCADA and security/emergency space (particularly with regard, but not limited, to cyber).
Edit/Update: Now that I’m no longer there, I do have a brief take on the subject and a summary of information HERE
This is a repost of my recent comments on SCADASEC with regard to the most recent rush of frantic reports of cyber-espionage and the subsequent pitchfork-waving demands for legislation and/or further immediate regulation.
Ok, so bad stuff is happening. Whether or not we agree on the extent, damage, or origins of attacks against our infrastructure, there’s no disagreement among people in the industry that there is a problem that must be dealt with. So, now that we’re here, let’s all take a breath and look around and assess where we’re at.
First, these intrusions do not seem to represent a substantial change in our tactical situation; these types of intrusions have been occurring in one form or another for years. We may be -detecting- them more frequently than before, but that’s it. A nationally significant incident occurring by way of a cyber attack against our critical infrastructure by a serious actor is, by many accounts, just as likely to happen now as it was a few years ago. This is interesting. It has long been observed that “the internet can be taken down in 30 minutes and no one is sure why that hasn’t happened yet.” I imagine that a similar thing can be said about our critical infrastructure.
While I am not suggesting that there is anything but a pressing, critical, national security level issue with the state of our cyber security and CIKR, I am suggesting that it is not so imminent that the value of taking our considered time in fixing the problem should be thrown out in favor of passing rushed, ill-advised legislation or regulation.
Let me elaborate:
The cyber security / critical infrastructure regulation proposals I have seen would absolutely achieve a short-term tactical gain in our level of security.
They would do so, though, by committing us to a permanent cyber security arms race at the cost of any hope of a long-term strategic win. We would spend all of our money, effort, and cycles repeatedly reacting to our adversaries’ changes in tactics, with no method of ultimately getting ahead of them. Eventually, we would have 853 (heh) layers of defense, attackers would still be getting through them all, and we’d have no money left to throw at more layers.
This is both because of the nature of the problem as well as the proposed solution. What we have on our hands is a complete architectural failure of our cyber networks with regard to “security”. It is not the lack of some subset of individual security controls. Mandating specific control sets at this point – or any existing in-place “security best practices” – would be akin to insisting that contractors keep building a house on top of a known bad foundation. Incremental improvements will never address that kind of a problem.
What we need (from our technology) but do not have are information-centric systems with end-to-end processing requirements designed into their bones. We skip the hard work of identifying what information we need our systems to produce, what information they need to take in initially, what transformations must be made to the source information, and who can make those transformations in what contexts. We then fail to tightly couple our code, our designs, and our infrastructure to those requirements when we do have them.
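As a minimal sketch of what designing those requirements “into the bones” might look like (the names here are hypothetical, not from any particular system): each piece of information declares its source, the transformations permitted on it, and which roles may apply each transformation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Transformation:
    name: str
    allowed_roles: frozenset  # who may apply this transformation

@dataclass
class InformationItem:
    """A unit of information with its processing requirements built in."""
    name: str
    source: str
    transformations: dict = field(default_factory=dict)

    def allow(self, t: Transformation):
        self.transformations[t.name] = t

    def apply(self, t_name: str, role: str) -> str:
        # Only declared transformations, applied by authorized roles, succeed.
        t = self.transformations.get(t_name)
        if t is None:
            raise PermissionError(f"{t_name!r} is not a defined transformation of {self.name}")
        if role not in t.allowed_roles:
            raise PermissionError(f"role {role!r} may not apply {t_name!r}")
        return f"{self.name} transformed by {t_name}"

# Hypothetical example: a meter reading may be aggregated by the historian,
# but only an operator may derive a setpoint change from it.
reading = InformationItem("meter_reading", source="field_device")
reading.allow(Transformation("aggregate", frozenset({"historian"})))
reading.allow(Transformation("derive_setpoint", frozenset({"operator"})))
print(reading.apply("aggregate", "historian"))
```

The point isn’t the particular mechanism; it’s that the allowable flows are explicit requirements rather than something bolted on by a firewall rule afterward.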
We skip it because it seems hard and expensive, and the perceived value of speed and the enticements of deferred costs seem to outweigh the risks to the organizations making these decisions. The costs of adding layers and layers and layers of ineffective security afterwards, however, are rarely calculated and compared to just doing it right the first time.
Instead of doing the right thing up front, we end up with tack-on solution sets like NIST 800-53. I don’t know about you all, but I’m pretty sure that if you did everything 800-53 describes – but never did the legwork I just described – security would still fail and it would fail badly. In fact, we see this time and time again in existing federal IT networks. 800-53, by itself, does not work for IT. Why would we legislate it for control systems? I don’t mean to pick on NIST here – it’s one of the better control catalogues out there – but that still doesn’t mean it works.
Technically, we are actually -nowhere near- industry agreement on how to solve the cyber security problem (Did anyone listen to Bruce Potter’s opening Shmoocon remarks? He astutely compared our current cyber security efforts to building a Maginot Line “In-depth”). If that’s true, then legislating something we know will never allow us to achieve a strategic win seems contrary to logic. But, if we want to put our heads in the sand and go the “any incremental gain we can achieve now is worth it even if we’ll have to redesign it from scratch later” route, the idea of legislating security controls for our critical infrastructure is still fatally flawed.
Why? Because a lack of security controls in our national critical infrastructure is not the problem, it is a symptom. Not only is it a symptom, but it’s a symptom of exactly the same problems that led to Wall Street’s collapse and the atrocious mortgage mess. Let me say that again: “it’s a symptom of exactly the same problems that led to Wall Street’s collapse and the atrocious mortgage mess.”
Those with budget authority – in both private and public organizations – are collectively and consistently making poor operational risk management decisions. They are opting for short term gains at the expense of long term strategic success. From where I sit, I honestly cannot tell whether it’s intentional or simply a lack of visibility into what the actual risks are (which stems from poorly designed organizational architecture). In either case, we have an issue of priorities by people making decisions – and that’s not a technical failure at all.
What happens if we mandate 800-53 or something similar? We create yet another technical compliance regime which, at best, only indirectly affects prioritization of cyber risk. The priority for decision makers becomes meeting the regulation, not securing their organizations. When this happens, the risk is pushed down to the dedicated people on this list, who then have to do the best they can in an environment where their organizations limit their ability to ultimately succeed. When that happens, we also find that good money is repeatedly thrown after bad and security, instead of being a business enabler, becomes a bottomless pit.
We need to find a way, if we think legislation is needed, to directly legislate cyber security as a priority and accountability for failure. If user information is stolen, decision makers need to be held responsible. If control systems are compromised in ways that could result in public harm, decision makers need to be held responsible. If people were suddenly on the hook for -succeeding-, then one would hope the market and industry would be driven to finding ways to succeed.
It would be nice if education, not legislation, would suffice for this. But what I’ve been hearing on this list and in professional forums seems to indicate that the time for that is almost behind us. So, if we’re going to end up with legislation or regulation, let’s do it slowly, so it goes smoothly, so it’ll work quickly.
If you’ve read some of my recent posts here, you’ll have seen that I’m back working on creating data visualization pieces as art. In the process of making these, I was reminded again of the relationship between art and security and its practical implications for enterprise security efforts, implications that literally dictate success or failure. Bear with me as I walk through the art piece first and then arrive at the security observations :)
First, to work, art has to have a solid concept. You might accidentally create a piece that’s appealing on some level if you just throw paint at canvas, but you probably won’t repeat that success often and observers will understand this.
Taking that into the realm of data visualization, you can make all the pretty graphs you like, but unless you do some leg-work ahead of time and massage the data into shape, they’ll be of little use and may only accidentally be visually appealing in a way that lets you intuitively grok them. (I think this is philosophically similar to some of what Tufte teaches, but I don’t remember for sure.)
For example, if I wanted to (as I did) visually represent the stimulus bill in a meaningful way on screen at once, I could really just use a microscopic font…or turn the whole thing into a jpg and resize it to fit on screen. But what would that accomplish? It would just be mush. We wouldn’t have identified or accounted for inherent structural properties that we needed to keep to preserve order. We also wouldn’t have separated the wheat from the chaff – useless information would hide useful information. And we wouldn’t have manually added linkages between data points that would help us draw meaningful conclusions visually to account for a loss of resolution in individual words.
What would work, instead, is to turn (as I did) the Stimulus Bill into columns of useful information. You could convert the free-form English structure of the Bill into a tabular format and add metadata about the text that you want to see in the visuals. You could add line numbers, position in sentences, group words by sections of the document, add word counts, etc. All this would show up visually and present a much more useful visualization that would also, because of the new, more conscious conceptual structure, be more appealing to look at.
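A hedged sketch of that tabularization step in Python (the field layout and section-marker heuristic are my own, not how the original piece was built): split free-form bill text into rows carrying line number, section, and word position, plus per-section word counts.

```python
from collections import Counter

def tabularize(text: str):
    """Turn free-form document text into rows of
    (line_no, section, word_position, word) plus per-section word
    counts -- the kind of metadata a visualization can actually use."""
    rows, section_counts = [], Counter()
    section = "PREAMBLE"
    for line_no, line in enumerate(text.splitlines(), start=1):
        stripped = line.strip()
        # Crude heuristic: lines like "SEC. 101. ..." start a new section.
        if stripped.upper().startswith("SEC."):
            section = stripped.split()[1].rstrip(".")
        for pos, word in enumerate(stripped.split(), start=1):
            rows.append((line_no, section, pos, word))
            section_counts[section] += 1
    return rows, section_counts

sample = "SEC. 101. Appropriations\nFunds are hereby appropriated"
rows, counts = tabularize(sample)
# rows[0] -> (1, '101', 1, 'SEC.'); counts['101'] -> 7
```

From rows like these, the visual encoding (position, color, grouping) can hang off real structure instead of a wall of shrunken text.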
So what does this have to do with security? Everything.
Recently, much has been made of the new SANS CAG control list. Basically, this is a list of “best practice” security measures and controls that, if properly done, will make the most impact in securing organizations. Where’s the problem? The problem is that none of these are new (except WiFi). They’ve all been around longer than I’ve worked in the field (7ish years) and probably much longer than that. Everyone who works in security knows them. Most CTOs, CIOs, and CISOs are probably familiar with them. And yet, they’re either not implemented or, more often, they just don’t work.
If these really are best practices (and they are), yet they’re not working, where’s the disconnect? I think it’s a lack of structure. Most organizations do not operate their businesses in a manner that can be secured. There are inherent structural flaws (as in, there isn’t any structure) in the enterprises themselves that conflict with and outright prevent security from happening – just like in art and visualizations. No matter how much effort or money you throw at the problem, cyber/IT/technical security controls will get you nowhere quickly (if anywhere ever) without a properly run and organized business. What failed cyber or IT security really is, ultimately, is a symptom of failed Operational Risk Management.
If you can’t track assets, if you haven’t identified your key data, if you don’t have clear and measurable business objectives for IT and cyber systems, if you don’t have a clear line of sight from the risk of technical failure to business impact, your security controls -will- fail.
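As an illustration (the asset names and fields here are hypothetical), even a minimal registry that ties each technical asset to the business function it supports turns a patch-sequencing decision from guesswork into a lookup:

```python
# Hypothetical minimal asset registry: each entry links a technical
# asset to the business function it supports and the impact of losing it.
ASSETS = {
    "hmi-01":      {"function": "plant operations", "impact": "high",
                    "notes": "vendor patch disables a feature the HMI depends on"},
    "desktop-117": {"function": "back office",      "impact": "low",
                    "notes": ""},
}

def patch_order(assets):
    """Sequence patching by business impact: low-impact systems first
    (as a de facto test bed), high-impact systems last and only after
    their recorded dependencies have been reviewed."""
    rank = {"low": 0, "medium": 1, "high": 2}
    return sorted(assets, key=lambda name: rank[assets[name]["impact"]])

print(patch_order(ASSETS))  # back-office desktop before the plant HMI
```

The registry itself is trivial; the hard (and valuable) part is the organizational discipline of keeping it accurate, which is exactly the structure argument above.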
Why? Because an organization run without these things will consistently make poor decisions based on incorrect, out-of-date, or conflicting information. In other words, you have to build break points into the business to be able to check, measure, and change the organization at key junctures in order to make good risk-based decisions. “Risk-based decision making” gets bandied about like “moving forward” and “synergies” – but it’s not an empty phrase, and it has real, concrete impacts and prerequisites.
Let’s look at a best-case scenario where everyone wants to do the right thing, but there isn’t an enterprise or business architecture in place. Everyone goes through an evaluation of need and risk, picks the right controls, and puts them in place. Hunky dory, yeah? Well, what happens when a new line of business is added? Nothing to do with security, right? What if the new line is taking critical data that wasn’t exposed by the other systems and inadvertently making it public? Would you know that? If you need to patch critical systems quickly to prevent a flaw, would you know which ones kept your business running? Would you have documented, in an easily accessible manner, the fact that your manufacturing systems depended on a feature that the new patch – which works just fine on desktops – disables? Etc. Not to mention that your IDSs depend on this info, your firewalls, your SEMs, everything. There is relatively little happening on your network that is inherently bad outside of a business context. There are many more (and probably better) examples…but there are two take-home points:
- Everyone with the authority to make changes to your business needs to be aware of the secondary dependencies of those decisions and how they intersect with security and inform others of changes they make
- If you try to do this without managed processes and without maintaining and continuously updating the information about the business in an architecture, you’ll fail. It’s too hard, too expensive, and takes too long to keep doing it from scratch. It’ll never be accurate, timely, relevant, etc.
Business leadership at all levels, in many (most?) organizations, is simply making bad decisions that affect security. It’s not that we don’t know, as security professionals, the right things to do. It’s that we can’t express it in terms of business risk, and business leaders typically don’t seem to have the structure built in to effect positive change throughout the organization. Build some good, clean structure with visible break points at critical junctures in your business flow, and security will start to become cheaper, easier, and more effective.
(Second Update: As of 9/14/2009, I’m working for Idaho National Laboratory (INL), liaising to DHS in DC supporting their ICS-CERT effort. This is reflected in the online resume, but not yet the pdf.)
Just a pinging post since I’ve just (finally) updated my resume on this site and elsewhere to reflect what I’m currently doing at TSA. Apparently, IDS analysts in this area are in hot demand, but that’s not really what I do any more. Unfortunately, what I -do- do isn’t as easy to tokenize/categorize as something like that. I do love it, though :) I like…making stuff work better than it did before and doing new things. People, in particular.
Here’s a link to the PDF: