(More mature thoughts on RDOSing…)
If you have one error, you fix it and move on.
If you have the same error again, you fix it “better” and move on.
But if you keep having a variety of errors at a steady or increasing rate, you stop looking at the causes of individual errors and look at your basic business practices.
Cyber Security problems are errors. Cyber Security problems are systems or data doing things their owners and society do not wish them to do.
Cyber Security errors keep occurring despite being fixed individually.
New types of cyber security errors are occurring over time as new systems are built, as data changes, and as new use cases develop.
By the time we fix our past errors, we’ve created new ones.
Let’s stop focusing national and organizational programs on fixing individual cyber security errors – or even fixing common classes of cyber security errors.
Instead, let’s focus on reducing cyber security error rates in general.
To reduce the rate of cyber security errors, non-cyber specific business practices must be evaluated to determine where cyber security errors are being introduced.
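To make the shift concrete, here's a minimal sketch of what tracking the error *rate* (rather than chasing individual errors) might look like. The function name, window size, and weekly counts are all hypothetical illustrations, not anything from a real program:

```python
from statistics import mean

def error_rate_trend(weekly_errors, window=4):
    """Compare the rolling average of recent error counts to the prior window.

    If the recent average isn't dropping, individual fixes aren't the answer;
    the underlying business practices need review.
    """
    if len(weekly_errors) < 2 * window:
        raise ValueError("need at least two full windows of data")
    recent = mean(weekly_errors[-window:])
    prior = mean(weekly_errors[-2 * window :-window])
    return "improving" if recent < prior else "review business practices"

# Hypothetical weekly counts of security errors found across an organization;
# the rate here is steady-to-rising, so fixing each one individually won't help.
print(error_rate_trend([5, 6, 5, 7, 6, 7, 8, 7]))
```

This is basic quality-control thinking: measure the process, not the defect.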
Hmm. This sounds a lot like business management and quality control, not cyber.
Yes, it does.
Tackling individual cyber security errors in our critical infrastructure without reducing error rates will assure failure.
Tackling error rates will create long-term, sustainable success: by reducing the number of errors that have to be dealt with in the first place, we free up the vast, unnecessary pool of resources we've allocated to individual problems for better use.
Stop wasting so many resources. :)
So, with what is quite interesting timing (and thanks, in no small part, to Twitter), I just found out a couple of days ago that I'll be giving a talk at EnergySec this year. The tentative title is: "A Technologist's Admission of Inadequacy: The executive's role in National Cyber Security".
I’d really like to use this opportunity as a platform for some of my concerns, as a technologist, about how we’re treating cyber security as a technical problem – at an operational level, at a strategic business level, and at a legislative level. I’ve touched on these concerns before in this blog, but I’m really excited about the chance to do it in person in front of a lot of other smart people who are actively working cyber security problems.
Thinking out loud, I wrote this earlier:
One of my interests, part of my future role, and with a perspective grounded in building/designing ways to detect badness / working on ICS-CERT, is in combating our habit of defining security in technical terms or relying on technologists to "fix it" without ever defining what "it" is. A secure system is one that does no more and no less than the people who have ownership and stake in it wish it to do – and that's a business rule/decision/appetite. As a technologist, if you ask me to secure your systems and let me define what that means, I'll fail. (i.e., there is no "evil" flag in TCP.) I'd like to make a plea for organizations to define security through risks to interrelated cross-sector business and social requirements (and associated appetites) before spending so much effort to create technical security plans, standards, controls, and laws. An army without a defined mission can be potent just based on size and power, but one that has a mission and defined goals is much, much better.
I’m sure I’ll evolve what I actually want to say between now and September, but that’s where my head is now.
Well, I’ve been waiting awhile to be able to write this (see future post). Finally, I can:
It’s always interesting dealing with the somewhat schizophrenic nature of government messaging. While I understand the constraints, the risks, and the realities of trying to run a free-for-the-private-sector service that actually DOES something in the government, it was always a little disheartening to hear (or read) people suggest that the government wasn’t doing anything for some of our cyber security problems, that it didn’t have the services available, or “Well, I heard DHS started ICS-CERT, but I think they shut it down?” And, with the media so often just not getting it – and people so often not doing basic research – this happened more frequently than it should. So, now that I’m in the role of customer here (and not on the floor there), I can finally say:
If you’re an asset owner, a vendor, a service provider, a customer, or otherwise a stakeholder in private sector or government critical infrastructure / key resources, you should be aware of CSSP and ICS-CERT (ICS-CERT has been functioning, in its current form, since earlier this year).
To start with: The Control Systems Security Program (CSSP) is an offering out of Homeland Security which:
“attempts to…reduce industrial control system risks within and across all critical infrastructure and key resource sectors by coordinating efforts among federal, state, local, and tribal governments, as well as industrial control systems owners, operators and vendors. The CSSP coordinates activities to reduce the likelihood of success and severity of impact of a cyber attack against critical infrastructure control systems through risk-mitigation activities.”
This includes providing a FREE cyber security assessment tool, onsite assessment visits, and the well-run Industrial Control Systems Joint Working Group (ICSJWG) and its associated conferences. CSSP also provides a variety of free training in Control Systems Security, both locally in DC as well as, for its hands-on Red/Blue Team training, in Idaho Falls.
Then, providing a tactical operational arm to the more strategic CSSP, ICS-CERT is a fully functioning free CERT service for your CIKR organizations. ICS-CERT will, as part of its mission:
- Provide onsite fly-away technical incident response
- Perform digital media analysis on media potentially affected by an incident
- Coordinate the responsible release of vulnerabilities (involving third party researchers, vendors, etc.)
- Provide timely situational awareness
- Coordinate national response, via its seats in the National Cybersecurity Communications and Integration Center (NCCIC), with US-CERT, NCC, Law Enforcement, and other organizations.
All you have to do, basically, is ask. They’ve assisted, during my tenure, quite a few organizations – large and small – and continue to do so.
(Importantly, ICS-CERT has neither a law-enforcement NOR a regulatory function. Their mission is to assist you in defending yourselves and responding to incidents. Your data is, and remains, yours, in any interaction with them.)
And you thought the government doesn’t do anything for cyber security :)
To contact ICS-CERT:
- Call the ICS-CERT Watch Floor: 1-877-776-7585
- Email regarding ICS related cyber activity: firstname.lastname@example.org
Their website is: http://ics-cert.org
So I was sitting in a critical infrastructure cyber security talk earlier this week and had a small revelation. The talk itself wasn’t all that interesting – it was another attempt to collect and identify consensus best practices for critical infrastructure security from a governance point of view – but it still led me down a path that surprised me.
The authors of the paper being presented had done interviews and other research and derived a number of principles required for critical infrastructure cyber security governance, based on what they commonly heard over and over. At the talk, we had break-out sessions where they were pinging us for our thoughts on their findings. During the session, I realized that I’d heard it all before (obviously, right? It’s a consensus paper) and was wondering why we couldn’t get past the stale “wisdom” repeated ad nauseam without effect…when it hit me: the proper use of their paper might be the direct opposite of what its authors think it is – but it’s still useful!
The thought process is as follows:
- Assumption: We all “agree” that cybersecurity for critical infrastructure is insufficient and we’re missing something.
- Assumption: The paper represented the community opinion, to date, on what needs to happen for good cyber security.
- People are trying to improve security, but despite sporadic improvements, we haven’t made nearly as much progress as we think we should. Something is missing.
Conclusion: Whatever it is we need to do…isn’t in that paper. If we collect a series of best practices and community consensus on a topic where we generally consider ourselves to have failed, that consensus should be used not as a driver of activity, but as a hint at what won’t, by itself, get us where we need to be. The lists should be considered things to exclude as solutions to our unidentified sticking points, not the solutions themselves.
Starting September 14th, I will no longer be contracting to TSA (via KCG, who have been wonderful). Instead, I will be working for Idaho National Labs (INL) onsite at DHS as a liaison between the smart people exploring the vulnerabilities of our nation’s critical infrastructure and the smart people at DHS CSSP doing the many things that they do.
Before I head out, though, I’d like to comment a little bit on an issue I’ve dealt with at TSA that I think also extrapolates to national cyber security efforts and is in no way unique to a single agency, or even the government. The issue is the label “cyber security”. At TSA, as at DHS, as within the media, as within popular culture, there is confusion as to what “cyber security” means – even at a very high level. The term gets bandied about so loosely that it means everything and nothing. Still, people are making policy based on it without any definition. The amorphous nature of the conversation is going to kick us in the pants sooner rather than later. Can we please nail it down more specifically when we discuss “cyber security”?
Below, find some areas of confusion that I’ve personally run into:
1. The internet, government networks, SCADA/ICS: This one is simple. When we talk about cyber security, we really need to preface our statements with which of these areas we’re discussing. They’re NOT THE SAME, and the strategies, ownership, etc. to deal with them are NOT THE SAME either. Over and over again, a lack of explicit distinction here burns us.
2. “IT Security” and Technology vs Strategy: Often, in my role, we were lumped in with what IT Security does: “Isn’t that the same thing, only with more computers?” was a popular sentiment. There is the concept that these efforts are technical in nature and that they look a lot like FISMA shops: Assess, Remediate, Certify, etc. against some standard or set of standards. Nothing could be further from the truth. “Cyber security” issues are of a strategic business and programmatic nature. We know how to fix computers; we don’t know how to define what security means to our businesses, how computers affect our operations, or what our risk appetites are. In other words, “cyber security” is an executive (CEO, CFO, COO, CTO, CIO) issue, not one for technologists.
3. Computers vs Infrastructure vs Business Assets: We don’t care in most sectors if our computers work. Really, we don’t. What we care about is that our energy grid keeps pumping out power, our chemicals get mixed right, our cars are manufactured correctly, our financial transactions are accurate, our goods get delivered on time, etc. These are the “assets” we are protecting. We are not protecting the internet, we are not protecting government computer systems. We are protecting the national operational interests of the United States.
4. Think globally, act locally: We’re so used to thinking about single companies and single systems within those companies that we forget that everything we do contributes to larger goals. Our enterprise systems work together to achieve business goals which must be protected. Our business goals within critical infrastructure sectors, in aggregate, also work together to support national goals. For instance, the thousands of independent companies in “the transportation sectors” all combine to “move people and goods throughout the US and the world on time, to the correct destination, in acceptable condition”. Many decision makers believe that it’s ok to ignore this larger context and focus on single-system security or, at best, enterprise security. This is dangerous. Since these systems are interdependent whether we acknowledge it or not, they can be used to exploit each other and damage our soft assets (goals) if we don’t regularly take a look at and secure the larger picture.
This is a repost of my recent comments on SCADASEC with regard to the most recent rush of frantic reports of cyber-espionage and the subsequent pitchfork-waving demands for legislation and/or further immediate regulation.
Ok, so bad stuff is happening. Whether or not we agree on the extent, damage, or origins of attacks against our infrastructure, there’s no disagreement among people in the industry that there is a problem that must be dealt with. So, now that we’re here, let’s all take a breath and look around and assess where we’re at.
First, these intrusions do not seem to represent a substantial change in our tactical situation; these types of intrusions have been occurring in one form or another for years. We may be -detecting- them more frequently than before, but that’s it. A nationally significant incident occurring by way of a cyber attack against our critical infrastructure by a serious actor is, by many accounts, just as likely to happen now as it was a few years ago. This is interesting. It has long been observed that “the internet can be taken down in 30 minutes and no one is sure why that hasn’t happened yet.” I imagine that a similar thing can be said about our critical infrastructure.
While I am not suggesting that there is anything but a pressing, critical, national security level issue with the state of our cyber security and CIKR, I am suggesting that it is not so imminent that the value of taking our considered time in fixing the problem should be thrown out in favor of passing rushed, ill-advised legislation or regulation.
Let me elaborate:
The cyber security / critical infrastructure regulation proposals I have seen would absolutely achieve a short-term tactical gain in our level of security.
They would do so, though, by committing us to a permanent cyber security arms race at the cost of any hope of a long-term strategic win. We would spend all of our money, effort, and cycles repeatedly reacting to our adversaries’ changes in tactics, with no method of ultimately getting ahead of them. Eventually, we would have 853 (heh) layers of defense, attackers would still be getting through them all, and we’d be out of money to throw at more layers.
This is both because of the nature of the problem as well as the proposed solution. What we have on our hands is a complete architectural failure of our cyber networks with regard to “security”. It is not the lack of some subset of individual security controls. Mandating specific control sets at this point – or any existing in-place “security best practices” – would be akin to insisting that contractors keep building a house on top of a known bad foundation. Incremental improvements will never address that kind of a problem.
What we need (from our technology) but do not have are information-centric systems with end-to-end processing requirements designed into their bones. We skip the hard work of identifying what information we need our systems to produce, what information they need to take in initially, what transformations must be made to the source information, and who can make those transformations in what contexts. We then fail to tightly couple our code, our designs, and our infrastructure to those requirements when we do have them.
We skip it because it seems hard and expensive, and the perceived value of speed and the enticements of deferred costs seem to outweigh the risks to the organizations making these decisions. The costs of adding layers and layers and layers of ineffective security afterwards, however, are rarely calculated and compared to just doing it right the first time.
Instead of doing the right thing up front, we end up with tack-on solution sets like NIST 800-53. I don’t know about you all, but I’m pretty sure that if you did everything 800-53 describes – but never did the legwork I just described – security would still fail and it would fail badly. In fact, we see this time and time again in existing federal IT networks. 800-53, by itself, does not work for IT. Why would we legislate it for control systems? I don’t mean to pick on NIST here – it’s one of the better control catalogues out there – but that still doesn’t mean it works.
Technically, we are actually -nowhere near- industry agreement on how to solve the cyber security problem (Did anyone listen to Bruce Potter’s opening Shmoocon remarks? He astutely compared our current cyber security efforts to building a Maginot Line “In-depth”). If that’s true, then legislating something we know will never allow us to achieve a strategic win seems contrary to logic. But, if we want to put our heads in the sand and go the “any incremental gain we can achieve now is worth it even if we’ll have to redesign it from scratch later” route, the idea of legislating security controls for our critical infrastructure is still fatally flawed.
Why? Because a lack of security controls in our national critical infrastructure is not the problem, it is a symptom. Not only is it a symptom, but it’s a symptom of exactly the same problems that led to Wall Street’s collapse and the atrocious mortgage mess. Let me say that again: “it’s a symptom of exactly the same problems that led to Wall Street’s collapse and the atrocious mortgage mess.”
Those with budget authority – in both private and public organizations – are collectively and consistently making poor operational risk management decisions. They are opting for short term gains at the expense of long term strategic success. From where I sit, I honestly cannot tell whether it’s intentional or simply a lack of visibility into what the actual risks are (which stems from poorly designed organizational architecture). In either case, we have an issue of priorities by people making decisions – and that’s not a technical failure at all.
What happens if we mandate 800-53 or something similar? We create yet another technical compliance regime which, at best, only indirectly affects prioritization of cyber risk. The priority for decision makers becomes meeting the regulation, not securing their organizations. When this happens, the risk is pushed down to the dedicated people on this list, who then have to do the best they can in an environment where their organizations limit their ability to ultimately succeed. When that happens, we also find that good money is repeatedly thrown after bad, and security, instead of being a business enabler, becomes a bottomless pit.
We need to find a way, if we think legislation is needed, to directly legislate cyber security as a priority and accountability for failure. If user information is stolen, decision makers need to be held responsible. If control systems are compromised in ways that could result in public harm, decision makers need to be held responsible. If people were suddenly on the hook for -succeeding-, then one would hope the market and industry would be driven to finding ways to succeed.
It would be nice if education, not legislation, would suffice for this. But what I’ve been hearing on this list and in professional forums seems to indicate that the time for that is almost behind us. So, if we’re going to end up with legislation or regulation, let’s do it slowly, so it goes smoothly, so it’ll work quickly.