Someone today asked me about CISA. The truth is, I've stopped paying attention. Everyone, just shut up and pass something so we can move on. But I do have perspective that might be relevant: I've spent the past 12 years in infosec, including doing threat analysis, have spent the past 8-ish years in Critical Infrastructure, have been a government operational incident responder to the private sector with access to super secret info sauce, have helped build a strategic government public/private partnership program, worked with a number of ISACs, and have worked in a non-profit ISAO-like environment. Here's what I think:
A long time ago, in a galaxy far too close to here, a bunch of techies, not in sufficient control of the business and other environmental factors to influence the cybersecurity exposure the business was creating or suffering from, said: "We need better, actionable information to succeed!". This was both sexy-tech driven and a last resort. If the business was leaving the doors and windows open, the "defenders" (heh) needed to know as much about their adversary as they could.
At the same time, businesses, finding they were becoming more and more on the hook for serious adversary conflict (as opposed to automated worms), tried to offload their responsibilities to the government. Lack of "Information Sharing" was a really convenient roadblock to partnership. "Hey, look, gov, we'd really like to help, but you've got all this awesome intel that you won't share, how can WE do anything? YOU should!".
Government, having its own interests, was also looking for more data because, essentially, most of theirs was limited or sucked or wasn't usable. At the end of the day, cyber conflict is occurring on private infrastructure – the government infrastructure either being tangential to the discussion at hand, handled internally, or a peer infrastructure to the private critical infrastructure (i.e., the internet is the internet is the internet, and it's all a common geography of conflict). So they said (and, for what it's worth, largely truthfully): "We can't send you information if you don't send US information! How can we know what's actionable for you?" The fact that they might have their own uses for the information was tangential to this roadblock/truth.
This was *exactly* what industry hoped would happen! Industry, having done this in the past with other non-cyber information sharing, knew this would stymie everyone for a while: competitive disadvantages, risk of prosecution for what they shared, inability of government to release classified information effectively, and the biggie – risk of regulation!
So at this point, we had:
Techies going: “Mmmm..Info Share! Sexy! We want more info! Wait, actually reduce exposure? That’s no fun, and besides, that’s really out of our control – business people suck at making decisions”
Industry: “Sweeeeet. This techie cry for Info Sharing is cool! It’s something that looks like low hanging fruit that we can use to block cyber interaction with the government indefinitely”
Gov: “Hmm. Cyber is scary and we have little to no visibility and we’re on the hook to help without (for the most part) regulation, we need information to better conduct conflict and apply game theory to international relations! We need to get industry to trust us and give us all their bits!”
Given the long history of the government ROYALLY screwing up trust relations with industry, this stood for years as a happy-medium-quagmire with everyone taking pot shots at each other from across entrenched positions.
But wait! Suddenly it actually got serious – the MEDIA started running away with cyber! Can those Chinese kids take out the power grid? OMG! (Note: I actually think the risks from cyber conflict are potentially VERY severe, but these are not the SAME risks as the ones the media got hold of). And suddenly, Congress, who KNOWS where its risks come from – bad political coverage by the media forcing uneducated people to vote or clamor for some MEME-OF-THE-DAY – got involved.
Congress: “Gov, Industry, Techies? What do we need to do CYBER better?!!?!?”
All: “Informaaaatttiion Shaaaarrrriiinnnnggg…”
And now, Congress has it, and everyone has COMPLETELY lost sight of the fact that, at best, information sharing is a MATURE and DIFFICULT capability that results from mature organizational awareness and decision making and will, again at best, help catch the EXCEPTIONS that are not handled by mature organizational decision making, and will do little to NOTHING to reduce cybersecurity risk exposure or to reduce the escalating cost and complexity of the problem over time. Instead, it will help better execute/conduct conflict in cyberspace, satisfy techies who want to play more complicated games and solve more interesting problems, and leave the governments involved without any real position change in their ability to apply game theory strategically to cyberspace.
(NOTE ABOUT THE BELOW: This post was more about the history of information sharing driving these types of bills. My comments below are much less informed)
Does CISA trample on rights and privacy? Maaaaayyybee – Probably not…this is an old discussion that wasn’t completely initiated by government. It may have secondary cascading effects, but I don’t believe that’s the primary motivation for it (or even A motivation).
Do I want them to pass it? Well, the government has shown it is PERFECTLY WILLING to try and get this information by other means, so….are we really losing anything? If nothing else, if we pass AN information sharing bill, at least there’s an increased possibility everyone will be able to finally share the Information that the Info Sharing Emperor Has No Clothes?
With all the blah blah blah going on about CISPA, I've managed to keep my mouth shut about it for a while, but it turns out I do have something to contribute to the dialogue (or, I think I do :) ).
I’m not going to review the language of the bill – I’m sure it’s terrible. Most cyber legislation is. It can’t not be. They all go too far, lack clarity of language, introduce unforeseen escalations of government rights, etc.
There’s no need to go over the givens. :)
So, then, what? Well, after I finally read CISPA and the surrounding reporting, what I noticed was that very few people seem to understand that the bill didn’t come out of nowhere. The language in it, the motivations behind it, the structure of the bill, etc…all of it… completely reflects the information sharing discussion that’s been going on between those engaged in public/private partnership cyber security activities for years. It’s not just a random congressional fart. Anyone who has been part of that discussion should recognize the bill as an old …if not friend…sparring partner.
For those who don't know, there is, in this space, an institutionalized gridlock in the debate about information sharing. CISPA is clearly an attempt to remedy this very, very specific gridlock. It's not a general cyber security bill. It's not even a general information sharing bill. It is designed to address the perspective that the government has information it won't share, that clearances have been roadblocks, and that legal ambiguities have prevented sharing.
Now, while I happen to think that some of these are in fact roadblocks, I also know CISPA doesn’t touch the heart of what the most severe and core information sharing problems are. But, unfortunately, I’m in the minority. A great number of otherwise intelligent people do believe in what it’s trying to accomplish, typically terrible language notwithstanding.
Maybe no one else finds this worth noting, but I at least thought it was unusual that the structure of the existing conversation is so clearly reflected in a piece of legislation…
This is a repost of my recent comments on SCADASEC with regard to the most recent rush of frantic reports of cyber-espionage and the subsequent pitchfork-waving demands for legislation and/or further immediate regulation.
Ok, so bad stuff is happening. Whether or not we agree on the extent, damage, or origins of attacks against our infrastructure, there’s no disagreement among people in the industry that there is a problem that must be dealt with. So, now that we’re here, let’s all take a breath and look around and assess where we’re at.
First, these intrusions do not seem to represent a substantial change in our tactical situation; these types of intrusions have been occurring in one form or another for years. We may be -detecting- them more frequently than before, but that's it. A nationally significant incident occurring by way of a cyber attack against our critical infrastructure by a serious actor is, by many accounts, just as likely to happen now as it was a few years ago. This is interesting. It has long been observed that "the internet can be taken down in 30 minutes and no one is sure why that hasn't happened yet." I imagine that a similar thing can be said about our critical infrastructure.
While I am not suggesting that there is anything but a pressing, critical, national security level issue with the state of our cyber security and CIKR, I am suggesting that it is not so imminent that the value of taking our considered time in fixing the problem should be thrown out in favor of passing rushed, ill-advised legislation or regulation.
Let me elaborate:
The proposed cyber security / critical infrastructure regulation proposals I have seen would absolutely achieve a short term tactical gain in our level of security.
It would do so, though, by committing us to a permanent cyber security arms race at the cost of any hope of a long term strategic win. We would spend all of our money, effort, and cycles repeatedly reacting to our adversaries' changes in tactics, with no method of ultimately getting ahead of them. Eventually, we would have 853 (heh) layers of defense, attackers would still be getting through them all, and we'd be out of money to throw at more layers.
This is both because of the nature of the problem as well as the proposed solution. What we have on our hands is a complete architectural failure of our cyber networks with regard to “security”. It is not the lack of some subset of individual security controls. Mandating specific control sets at this point – or any existing in-place “security best practices” – would be akin to insisting that contractors keep building a house on top of a known bad foundation. Incremental improvements will never address that kind of a problem.
What we need (from our technology) but do not have are information-centric systems with end-to-end processing requirements designed into their bones. We skip the hard work of identifying what information we need our systems to produce, what information they need to take in initially, what transformations must be made to the source information, and who can make those transformations in what contexts. We then fail to tightly couple our code, our designs, and our infrastructure to those requirements when we do have them.
We skip it because it seems hard and expensive, and the perceived value of speed and the enticements of deferred costs seem to outweigh the risks to the organizations making these decisions. The costs of adding layers and layers and layers of ineffective security afterwards, however, are rarely calculated and compared to the cost of just doing it right the first time.
Instead of doing the right thing up front, we end up with tack-on solution sets like NIST 800-53. I don’t know about you all, but I’m pretty sure that if you did everything 800-53 describes – but never did the legwork I just described – security would still fail and it would fail badly. In fact, we see this time and time again in existing federal IT networks. 800-53, by itself, does not work for IT. Why would we legislate it for control systems? I don’t mean to pick on NIST here – it’s one of the better control catalogues out there – but that still doesn’t mean it works.
Technically, we are actually -nowhere near- industry agreement on how to solve the cyber security problem (Did anyone listen to Bruce Potter’s opening Shmoocon remarks? He astutely compared our current cyber security efforts to building a Maginot Line “In-depth”). If that’s true, then legislating something we know will never allow us to achieve a strategic win seems contrary to logic. But, if we want to put our heads in the sand and go the “any incremental gain we can achieve now is worth it even if we’ll have to redesign it from scratch later” route, the idea of legislating security controls for our critical infrastructure is still fatally flawed.
Why? Because a lack of security controls in our national critical infrastructure is not the problem, it is a symptom. Not only is it a symptom, but it’s a symptom of exactly the same problems that led to Wall Street’s collapse and the atrocious mortgage mess. Let me say that again: “it’s a symptom of exactly the same problems that led to Wall Street’s collapse and the atrocious mortgage mess.”
Those with budget authority – in both private and public organizations – are collectively and consistently making poor operational risk management decisions. They are opting for short term gains at the expense of long term strategic success. From where I sit, I honestly cannot tell whether it’s intentional or simply a lack of visibility into what the actual risks are (which stems from poorly designed organizational architecture). In either case, we have an issue of priorities by people making decisions – and that’s not a technical failure at all.
What happens if we mandate 800-53 or something similar? We create yet another technical compliance regime which, at best, only indirectly affects prioritization of cyber risk. The priority for decision makers becomes meeting the regulation, not securing their organizations. When this happens, the risk is pushed down to the dedicated people on this list, who then have to do the best they can in an environment where their organizations limit their ability to ultimately succeed. When that happens, we also find that good money is repeatedly thrown after bad, and security, instead of being a business enabler, becomes a bottomless pit.
We need to find a way, if we think legislation is needed, to directly legislate cyber security as a priority and accountability for failure. If user information is stolen, decision makers need to be held responsible. If control systems are compromised in ways that could result in public harm, decision makers need to be held responsible. If people were suddenly on the hook for -succeeding-, then one would hope the market and industry would be driven to finding ways to succeed.
It would be nice if education, not legislation, would suffice for this. But what I’ve been hearing on this list and in professional forums seems to indicate that the time for that is almost behind us. So, if we’re going to end up with legislation or regulation, let’s do it slowly, so it goes smoothly, so it’ll work quickly.