So, I said I would do a “Risk Analysis” of Trump’s executive order using an actual risk framework to prove the order was dumb and where/why/how it was dumb. Here it is! I used FAIR! (Note: I used a single data point here, because that’s all this analysis needed – I just go through control relationships and impact types mostly – but despite the snarky tone, the analysis is legit and if someone runs with this and fills in all the factors with actual numbers, that would likely be a fun exercise – I’d love to see what the actual estimated ALE is.)
In FAIR (Factor Analysis of Information Risk), the expression of risk is in the form of the “Probable frequency and magnitude of loss” due to a weakness. It is a useful model for many things, including identifying whether a perceived weakness is actually a weakness in terms of your own appetites and, interestingly, what effect controls have on your risk and how they affect each other.
I’m not going to introduce the whole model here, and a lot of this is paraphrased, so a quick overview is probably helpful. If you’re not familiar with FAIR, start with Risk Lens’ “FAIR on a Page” (it’s also an open standard, which you can read about at the Open Group’s website).
Moving on: In the model (an ontology, see above), you have a “Loss Frequency” measure and a “Loss Magnitude” measure. Each of those measures is determined by a number of factors. These factors include things like “Threat Event Frequency” and “Vulnerability”. Each of these factors is itself composed of other factors (e.g. “Contact Frequency” and “Probability of Action” combine to estimate “Threat Event Frequency”).
On the right side of the ontology, “Loss Magnitude” is split into “Primary Loss” (costs that happen every time a threat event is successful) and “Secondary Loss” (costs which could occur for a subset of Loss Events and which come from impacts to other stakeholders not accounted for in Primary Loss). Think: A bank loses customer data and has to do incident response (primary loss) and sometimes must also provide free credit monitoring to those customers (secondary loss).
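To make the factor roll-up concrete, here’s a minimal sketch of how the pieces multiply together. Real FAIR work uses calibrated ranges (min/most-likely/max) and Monte Carlo simulation rather than single numbers; every number below is a placeholder, not an estimate of anything real.

```python
# Simplified point-estimate roll-up of the FAIR ontology.
# Real FAIR analyses use calibrated ranges and Monte Carlo simulation;
# these single values are placeholders for illustration only.

def threat_event_frequency(contact_frequency, probability_of_action):
    """Threat events/year = contacts/year x P(a contact becomes an attempt)."""
    return contact_frequency * probability_of_action

def loss_event_frequency(tef, vulnerability):
    """Loss events/year = threat events/year x P(an attempt succeeds)."""
    return tef * vulnerability

def annualized_loss(lef, primary_loss, secondary_loss_prob, secondary_loss):
    """ALE = loss events/year x (primary loss + expected secondary loss)."""
    return lef * (primary_loss + secondary_loss_prob * secondary_loss)

# Placeholder inputs, purely illustrative:
tef = threat_event_frequency(contact_frequency=1_000_000,
                             probability_of_action=1e-9)
lef = loss_event_frequency(tef, vulnerability=0.01)
ale = annualized_loss(lef, primary_loss=1e9,
                      secondary_loss_prob=0.5, secondary_loss=5e8)
print(f"TEF={tef:.6f}/yr  LEF={lef:.8f}/yr  ALE=${ale:,.2f}/yr")
```

Note how the left side of the ontology (contact frequency, probability of action, vulnerability) only ever multiplies down the frequency, while the right side (primary and secondary loss) sets the per-event cost. That asymmetry is exactly what the analysis below leans on.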
While this model is clearly geared for “information security”, it’s actually pretty useable for reasoning through other “risk” scenarios.
For instance, why is the Trump executive order on banning immigrants….dumb? Can we express this clearly and show the interplay between factors that gets us to “dumb” in a rational way? Yes! I’ll walk you through it! Refer back to Risk Lens’ “FAIR on a Page” if you need to:
The first thing we need to do is determine threat event, threat community, loss events, and assets. Based on the executive order, we can say this:
- Asset: US interests inside its borders
- Threat Community: Anyone from one of the 7 countries
- Threat Event: Attempted Terrorist Attack inside US borders
- Loss Event: Successful Terrorist Attack inside US borders
- Secondary Loss: The consequences and responses to a successful terrorist attack inside US borders (this is important)
So, with that said, let’s run through the ontology:
1. Loss Event Frequency: None. We have never experienced a successful attack inside US borders from this threat community, and there’s no evidence to think we will soon. “0 Loss Events” x “Any magnitude of loss” = “No Loss, No Risk”. Normally, if we just wanted to know the immediate and probable future risk, we could stop here: We know how often loss occurs and we know that an executive order can’t reduce “0” any further. But we also want to examine what could change and whether the executive order will help manage THAT totally speculative future problem, so we will look below the surface to…
2. Threat Events (Loss Event SubFactor 1): Because Loss Event Frequency is so low, it doesn’t really matter how often folks TRY to cause loss if they’re rarely successful (no control can reduce “never successful” any further). But perhaps if Threat Events go up enough, we will experience a noticeable increase in Loss Events? This makes sense…but to understand whether this will happen and whether the executive order will help, we have to understand the two Threat Event Factors:
A. Contact Frequency (Threat Event SubFactor 1): How often is the threat community in contact with US assets inside the border? Pretty often! We have lots of folks from all over the world in this country. Trump’s executive order keys in on this factor in particular. If we reduce contact frequency, we will reduce the number of threat events, which will reduce the number of loss events! (From..again… “0” to…less than 0?)
B. Probability of Action (Threat Event SubFactor 2): We do have a Threat Community that comes in FREQUENT contact, but they rarely, if ever, take action. In fact, given how high the contact frequency is vs successful threat events (i.e., loss events), we can say the Probability of Action is VERY low (if it weren’t low, we’d be seeing more successful attacks) – so low that, unless something changes to increase probability of action, we are at very low risk from this threat community (remember this for later: What could change the probability of action?)
3. Vulnerability (Loss Event SubFactor 2): Ok, so…that’s Threat Event Frequency…. What about our Vulnerability in the face of Threat Events? Maybe they’re trying a lot and just not succeeding? If that’s the case, then Trump’s executive order STILL doesn’t have an impact, because it attempts to control for “Contact Frequency” and does not affect either of “Vulnerability”’s two factors: required “Threat Capability” and “Resistance Strength”. The executive order doesn’t increase the tools/skills needed by bad guys, and it doesn’t make our Assets (people, infrastructure) particularly more resistant to attacks in the face of someone who chooses to take action. We MAY be able to improve this, but Trump’s order doesn’t speak to it, and given the low Loss Event Frequency it probably isn’t necessary for this Threat Community.
There we have it: Loss Event Frequency – very low, and Trump’s order doesn’t really speak to it anyway. What about Loss MAGNITUDE? Here it gets interesting:
1. Primary Loss Magnitude: This is what the immediate aftermath of an attack would be for the US. If anyone gets in and does damage, Trump’s order does nothing to minimize that magnitude or impact. Once it happens, it happens. Having a smaller Threat Community doesn’t make the pain or cost less later. So this is a null factor.
2. Secondary Loss Frequency and Magnitude: Woah! :) Here we have a problem. Because it turns out Trump’s order IS A SECONDARY LOSS in FAIR terms. It is a RESPONSE to prior terrorist attacks. Because we were attacked, someone used FEAR (not rationality) to justify keeping folks out of the country. Families. Children. Scientists. Injured. It was insensitive to entire nations of people who were not already likely to take action (based on the Loss Event Frequency analysis). But, what happens if you find your son or daughter couldn’t get medical treatment because of this? What happens if you find your family split up over this? What happens if your radical organization can turn to you and say “Look, we were right, these people are assholes and don’t want you”? What happens is THE PROBABILITY OF ACTION INCREASES FROM NOT ONLY THE ORIGINAL THREAT COMMUNITY, BUT OTHERS. In other words, people otherwise unlikely to take action before are angrier as a result of Trump’s actions and are now more likely to act against the US, thereby increasing future primary and secondary losses.
Summary (in case it wasn’t clear):
- Trump’s order does not minimize our vulnerability
- Trump’s order reduces the “Contact Frequency” of a threat community that has never been a demonstrated source of losses to the US internally
- The only significant impact on US risk that Trump’s order actually has is that it likely INCREASES the probable future frequency and magnitude of loss to the United States (its risk) by increasing the probability of action factor without affecting the others either way
Conclusion: After a FAIR Risk Analysis of Trump’s order, it turns out it was indeed DUMB.
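If you want the summary in arithmetic form, here’s a toy before/after comparison. Every number is invented purely to show the shape of the argument: even halving contact frequency loses to a modest increase in probability of action, because the factors multiply.

```python
# Toy before/after comparison of the order's effect on FAIR factors.
# All numbers are invented for illustration only.

def expected_annual_loss(contact_freq, p_action, vulnerability, loss_per_event):
    # LEF = contact_freq x p_action x vulnerability; ALE = LEF x loss magnitude
    return contact_freq * p_action * vulnerability * loss_per_event

before = expected_annual_loss(contact_freq=1_000_000, p_action=1e-9,
                              vulnerability=0.01, loss_per_event=1e9)

# The order halves contact frequency but, per the secondary-loss argument,
# increases probability of action tenfold:
after = expected_annual_loss(contact_freq=500_000, p_action=1e-8,
                             vulnerability=0.01, loss_per_event=1e9)

print(f"before=${before:,.2f}/yr  after=${after:,.2f}/yr")
assert after > before  # risk went UP despite fewer contacts
```

The specific multipliers don’t matter; what matters is that a control targeting only one multiplicative factor can be swamped by a side effect on another.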
Please find, linked here, a VERY preliminary draft of the “Model” component of my “MVC Approach Based Cyber Security Framework” (informally and with tongue firmly in cheek titled “#NISTCSF-BSIDES” as well).
Everyone, I’ve been working on these ideas for a very long time, but I have not yet had time to write up the 50-odd pages of justifications, explanations, and experience that have driven this, or the many ways it provides specific value and solves specific real-world problems I’ve encountered.
But, in the interest of time – and to keep perfection from getting in the way of good – I thought it was worth throwing out to you all to see what you think without all of that supporting material. It’s worth noting that I intend to provide a base model for a complete framework and that there are at least two further levels (equivalent to the coding concepts of controllers and views) that should be built on this – those are what end users of the framework would actually see and use.
Frankly, I’m almost relieved to finally have something this coherent on paper, even if it might never be finished. I hope it ends up being useful, though. I only ask that if you find it so, please engage me so that (at least) I can convey much of the other support concepts that are not yet represented.
Take a look and let me know? Even if you think it’s dumb. :)
(This post is significantly “Draft”. I’m just out of time to edit it more and I figured “sooner” was better than “later”)
As many of you know, I attended NIST’s first “working” workshop to develop a National Cyber Security Framework (CSF) last week. This post is more of my own critique/comments on the process than a summary, but I’ve included some structural commentary as well below.
So, first things first: thank you NIST, facilitators, participants, and the White House. This is a huge opportunity for us all to get our “stuff” together on a very short timeline in a process that has more than its fair share of detractors. The fact that so many (400+? What was the final total?) came together to work is impressive and encouraging. I’m happy and honored to be able to provide my $0.02 (Ok, I do keep hijacking the #nistcsf hashtag, so maybe $0.04).
NIST: National Institute of Standards and Technology, part of the Department of Commerce; a solid organization that does good work. Anyone experienced with their products can see why the White House tasked them with facilitating the development of a National Cyber Security Framework (CSF). They are good at getting people together, soliciting input, and putting it all together nicely. Unfortunately, what I am not sure they are good at is what is needed most – the ability to break new ground. (Although I’m happy to be wrong here, I haven’t seen it myself. Correct me if I’m mistaken?) It’s built into the name…the word “Standards” implies something proven, known, and defensible. The (collective) cyber security disciplines, however, are still the antithesis of those. We have yet to regularly build secure systems, and so I fear that an organization looking for *effective* “standards” in this area at this time may be set up for failure. Still, there is time.
Workshop Mechanics: NIST handled the mechanics of gathering input nicely. We were organized into 8 groups of people who rotated (in the same groups, to encourage consistent dialogue) through 4 different topical tracks (derived from RFI responses) over the course of three days. The tracks were (and I’m summarizing): Threat Management & Info Sharing, Maturing Cyber Security, the Business of Cyber Security, and Cyber Security Dependencies & Resiliencies. The facilitators (including Bruce Potter from the Shmoo Group) by and large did a good job eliciting input from us, even though they had only been engaged a short time before the workshop and did not all have experience in the national critical infrastructure dialogue. The real limiting factor from a facilitation standpoint was not the facilitators, but NIST’s philosophical process approach to meeting the executive order’s requirements.
Workshop Philosophy: Early on, I (and others) were critical of a “lack of rails” provided to participants to structure the dialogue (and I still am, but I’ll hit that later), but I spent some time talking to both facilitators and NIST staff and feel more comfortable that I understand what they are doing and why. Their perspective is that they need to gather all the “pieces” of the puzzle from industry first, without applying any inherent structure, and then, in later workshops, analyze all of those pieces to find any useful underlying structure or concepts in the data, apply those to the framework, and then fill in the meat. In many situations, this is a good idea and I understand why they are doing it. The biggest takeaway here is that NIST thinks we shouldn’t get too bent out of shape by, or read too much into, the questions being (or not being) asked at this stage. The insight, creativity, and thinking will come later (according to them). In other words, if you think (as Russ Thomas also does): “Clearly, most ppl running #NISTCSF process don’t see innovation as essential. Their focus is curating ‘best practice'”, then you are right at this moment in time…but perhaps also wrong in the long term.
Workshop Participants: Everyone-ish. Heavy on electric, industry association, academia, and vendor perspectives though. Low representation from other sectors. I’m wondering if (really, can someone answer this?) all the other sectors are busy supporting DHS Executive Order work like the CIIDWG (Critical Infrastructure Identification Working Group) instead of the NIST Framework process. I’m not sure why they would – the efforts are complementary – or why electric chose to show up – but there were clear biases in industry. Is NERC handling the other EO work on behalf of electric and so the only way for asset owners to get involved in the process is via the framework? I don’t know.
Suggestion for NIST: Add a “class” ahead of the next one so that everyone participating has the same background in what’s going on. Everyone came into this from wildly different backgrounds on “critical infrastructure protection” and it took a long time to normalize them (which never actually completely happened).
Process Observations: The process went “ok”. The main observations I have are related and center around dialogue maturity. First, the same issues were brought up in every single track – whatever the topic. This denotes the participants’ fundamental lack of a structured, coherent model of the problem space in their minds. The dialogue did *improve* over the three days as people (including myself) got their hot topics “off their chests” and began to really think about what they were being asked. But, in my mind, without appropriate structural/conceptual rails, it would take *months* of this before a group this large got past their pre-conceptions and tribal knowledge and actually got down to truly productive “thinking”. I understand why NIST did what they did, but I think time will show it to have been a mistake.
Observations On & Suggestions For Content: For the most part, responses hit a ton of different topics and I think it’s out of scope for me to comment too much on the actual discussions. Others – and obviously NIST – will do a better job of that than I would. However, I thought three items were of interest:
1. “Lensing”: For those of you who have seen my SOURCE Boston 2013 presentation from a month ago, you’ll be familiar with the concept of “lensing” cyber security into focus for different audiences. I had never heard of or used the term until Ms. Jen Giroux and I talked about it. What I found interesting was that the term was used by NIST staff to describe how, at the end of the process, there would not be some final product so much as a bunch of information “lensed” for different audiences. Maybe it’s just coincidence, but maybe all my mouthing off has had some positive impact. :) Hope so.
2. “Reasoning Tools vs Standards”: Is the framework going to just have static standards and controls, or will it provide reasoning tools (or be a reasoning tool) to help “make things better”? There was no clear answer and dialogue seemed to suggest a little of both.
3. “Defend vs Improve”: Few participants or facilitators seemed to talk much about changing the strategic relationship between defenders and attackers. There was little acknowledgement that simply “defending” against the bad guys will result in ultimate failure – a) they will simply out-resource us and b) as we add complexity, unless we improve processes, we will make more and more mistakes implementing security.
That said, and in support of the lensing idea, I suggest something similar to the following diagram be used in the upcoming feedback evaluation process to help organize answers:
Short Topical Notes:
1. Several participants brought up “Control Systems vs IT” and asked if “ICS will be covered”: Of course the NISTCSF will deal with control systems issues. :) It’s being developed (among other things) in support of the executive order’s outcome-based perspective on cybersecurity (which, also of course, would naturally consider control systems, since their failure tends to create significantly bad outcomes). The level in the security stack at which both the EO and the CSF operate is higher than the level at which the distinctions between IT and ICS happen, and so they are inclusive of both without the need to distinguish between the two in policy statements.
2. Several people also wanted to limit dialogue to “cyber only” and ignore “business” or other non-core cyber considerations completely. I think these suggestions show an unfortunately poor grasp of the problem space: there is no such thing as “cyber security only”, and our core failings have been in the broader set of activities beyond core cyber.
3. Lots of people said “lexicon”. Lexicon is only a component of what we need: A class-level Ontology. Think of an ontology in this sense as a high level set of relationships between concepts that fit together to define what we mean by cyber security. A lexicon contains the words and definitions we use to implement that ontology in real life.
Summary: Good Process, Missing several key concepts (from participant input and facilitation), I hope they get included later.
Poor Problem Identification, A Boats vs Airplanes Parable: A community agrees that they need to travel across the ocean better than they have been. They’re experienced in building boats, but they feel they need to do better because many boats sink and there is an overall feeling that they’re too slow. So, they get all the boat builders together, lay out all of the best practices, and then try to figure out how to build the best boat possible. Unfortunately, at the outset, they did not identify “too slow” as meaning “more than 24 hours” – there’s no way a boat will *ever* cross an ocean that fast. Unfortunately, the best solution – heavier-than-air flight – is not well known. Few in the community are aware of the concept, and all of the questions being asked center around boat-building anyway. Their efforts are doomed to fail – no one will ever be happy with a boat – but they don’t know it. If instead, however, the community had realized that the existing models – no matter how well perfected – would never meet their requirements, perhaps they would have gotten the boat builders together and provided some rails: “A boat won’t work, we have to get people safely across the ocean in less than 24 hours, but here’s how flight works – apply the same problem solving skills you’ve developed as boat builders to the problem of building aircraft.” But they didn’t. This is how I feel the NIST Framework process is proceeding. We are strategically approaching cyber security “wrong”, and continued use of current models will assure that the bad guys retain the advantage.
Parable Applied: Conceptually, I like the idea of gathering everyone’s thoughts, laying them out, seeing what the relationships are, and then identifying good framework components and structures. The problem here is that over and over again I’ve seen – while helping to shape government perspectives on cyber security, working with an entire sub-sector at once on cyber risk management as a fed and later as a facilitator, and trying to sort out industry confusion on other cyber topics – that our haphazard models for security are neither well understood nor particularly effective. Everyone knows what a saw does, a hammer does, a screwdriver does, but no one has *ever* defined what a boat is, and even then, what we really need is an airplane. The boats we’re building – even the best of them – keep sinking and are too slow. As NIST goes out and gathers data, I hope they realize that the patterns they’re looking for probably don’t exist yet and that any framework stemming from current best practices will only exacerbate the problem, not make it better. This is because if we spend all of our time, resources, and good will trying to build a better boat, there will be little left later for the airplane we need. Making boats flight-worthy (as opposed to airplanes sea-worthy) is very, very expensive and hard.
Parable Plus Scale: I’ve also noticed very little emphasis on the problem of “sustainability over time”, which is a large issue when dealing with current levels of scale and increasing complexity. What I mean here is that many (most?) folks (not all) seem to think in terms of “fixing things now”. Many seem to believe that we just need to get everyone up to a certain level of security *now* and we can go from there. The philosophy of “fixing it now”, though, is actually one of our most serious vulnerabilities. “Fixing it now”, in our current environments, takes a *huge* amount of time, resources, marketing, commitment, thinking, etc. (for organizations or the nation, depending on scope) to get everyone together to figure out what “the best answers” are. Unfortunately, by the time we finish the process to anyone’s satisfaction, the world has changed and our solutions won’t work. What we need are not answers to how to secure ourselves now, but answers to how to make the process of deriving and implementing effective security concepts fast, accurate, and efficient enough to keep pace with the world in a way that a) won’t blow out all of our resources and b) will have a high level of quality in implementation. We need to create tools and cyber/non-cyber environments that will shorten the overall cyber security lifecycle, simplify the process, and reduce errors. The Framework must aim to aid the derivation of answers, not provide them. Better derivation tools will help us figure out what we really need, and maybe we’ll learn to build airplanes instead of boats. I really wish I had seen more of this perspective from the organizers and the participants.
Final Words: If we do not change our entire approach to cyber security, if we do not learn to dramatically adjust what we do based on failure, if we do not handle the issues of scale/complexity, the bad guys will continue to win. We will run out of money, time, and will if we keep walking down these same paths. I sincerely hope the next NIST Cyber Security Framework development workshops will take these realities into consideration as part of the process. Please read these two posts for further exploration of related topics:
Quick shout out to the people who made the workshop less dry with good conversation and good company: Lena Smart, Andy Bochman, Mike Dahn, Fowad Muneer, and the CMU REP Team. :) Was good seeing/meeting you all.
This year, thanks largely to Josh Corman, I had the opportunity to speak at SOURCE Boston. It was an interesting experience and the first time in a couple of years I’ve had the chance to talk in front of a general security/hacker audience (BSides Chicago was the last) – vs one focused on critical infrastructure specifically (like a NATO conference in Tbilisi, Georgia). Thanks Josh. Also, thanks Jen Giroux for helping me lens myself – your perspective was crucial.
More important than my talk are the slides themselves. I managed to put together one of the only presentations you’ll find with a relatively short summary of the critical infrastructure landscape that also provides some framing help and advice on how to approach the topic more effectively (see this post for a longer treatment of the executive order sections). It’s meant to have a strong verbal component, so if something seems incomplete or you need more information, feel free to ping me. I hope you enjoy. (PDF HERE)
(Consider viewing these full screen)
To Whom It May Concern –
In response to this RFI, rather than suggest specific content, I would like to bring NIST’s attention to several conceptual perspectives that I believe have so far been underrepresented in the discussion.
Perspective 1: A Need for Common Conceptual Framing
First, I believe the potential value of a successful framework will not be in the content, but in the conceptual model the content is organized around. One of the primary problems facing us as individual organizations and as a nation is not only the lack of a common cyber security lexicon, but also significantly incomplete and often incompatible views as to what comprises cyber security itself. This point can be illustrated in two ways:
- After attending the recent NIST Framework Workshop, it was evident that many speakers were discussing only component pieces of cyber security (e.g., information sharing), and not the entirety of the problem (e.g., procurement). The result was a grab-bag of security ideas that could not be evaluated in terms of each other or their role in security as compared to the rest of the ideas shared. The discussion lacked the structural and conceptual rails required to guide the participants down the path of solving the same problem. I was left wondering “How does this all fit together?”.
- One of the critical infrastructure sectors recently asked their pertinent government agencies (there were 4 represented) for guidance on which federal tools and frameworks should be used, by whom, when, and why. Industry believed the tools lacked appropriate descriptions. After investigation, the fundamental issue was not that the tools lacked descriptions, but that those using them were not aware of the full scope of problems which needed solving. Participants lacked a common, complete conceptual framework in which to evaluate the tools. This lack of a broad, structured, conceptual model made it difficult for them to assess or use other content.
These are only two examples of many. This is a problem that occurs in almost every cyber security dialogue – even among cyber security SME’s. For this reason I believe that one of the primary values of the NIST Framework should be in providing that common view – not only of security practices, but also how those practices fit together to reduce risk. One might call it a “cyber security algorithm” where program, practice, and control domains are variables which must be used to solve for “assured risk reduction”. In such a model, individual best practices and content elements can be tied to each “variable” and can be selected by industry. This provides some assurance that they are all working coherently together.
Such a model could conceivably be broken down into six different layers of activities (national, sector, business, architecture, implementation, operation) broken into two dependent but different risk life cycles: Strategic (risks from cyber systems) and Operational (risks to cyber systems).
In this manner, the structure of the NIST framework could be used independently of the content to educate readers, assist them with communication, and be helpful as a tool to solve for specific cyber security outcomes.
Perspective 2: Non-Cyber Business Maturation and Foundations
In my experience, many organizations have quite capable cyber-specific security practices, but their extra-cyber practices are not able to effectively use or support those good cyber-specific ones. These extra-cyber practices include procurement, marketing, scheduling, business operations, development, testing, sales, database administration, communications, etc. It is often said that “good security isn’t bolted on, it’s baked in”. That is only partially correct. Good security is good business – there is often little to distinguish the two. Security usually fails long before anyone with “information security” in a title or department name is involved. As such, I believe the NIST framework should focus more on identifying good business practices which lead to successful cyber security than on cyber-specific ones. It should also keep in mind that those most in need of the framework are the least likely to understand their own role in the cyber security problem domain.
Perspective 3: Quality Assurance & Human-Centric Cyber Security
As we have seen many times now – in the cases of some large and well known security breaches of organizations who were fully aware and invested in cyber security best practices – the problem we are facing is not just one of knowledge, but one of consistency of practice. It is relatively difficult, the way we do business today, to assure the application of best practice (whether through internal business incentive or government regulation) in a consistent manner. The NIST framework should attempt to improve this consistency.
One aid in achieving that consistency is identifying where cyber security faults – which are really just errors made by a human in an authorized role somewhere on a timeline – are occurring and describing them in terms of human-role/authorized-action control pairs.
Examples could include: CEO/SuccessDefinition, Vendor/FeatureInclusion, Vendor/QualityAudit, ProcurementOfficer/ProductEval, Subcontractor/OrganizationBridging, ITManager/WorkPrioritization, etc.
Putting these pairs into a timeline or lifecycle model would allow us to describe desired cyber security state and control points in a manner that would: Be valid through most possible iterations of technology, allow users of the framework to better identify which best practices were applicable when and to whom, reduce cost by placing controls as close to the fault source as possible, and help increase consistency by more effective and efficient control placement.
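To picture what such a timeline model might look like in practice, here is a minimal sketch. The pair names come from the examples above, but the lifecycle phases and the data structure itself are my own illustrative assumptions, not part of any published framework.

```python
from dataclasses import dataclass

# Hypothetical sketch of human-role / authorized-action control pairs
# placed on a lifecycle timeline. Phase names are illustrative only.

@dataclass
class ControlPair:
    role: str    # the human role authorized to act
    action: str  # the authorized action where a fault could occur
    phase: str   # the lifecycle phase where the control applies

# An assumed, simplified lifecycle ordering:
LIFECYCLE = ["define", "procure", "build", "operate"]

pairs = [
    ControlPair("Vendor", "FeatureInclusion", "build"),
    ControlPair("ITManager", "WorkPrioritization", "operate"),
    ControlPair("CEO", "SuccessDefinition", "define"),
    ControlPair("ProcurementOfficer", "ProductEval", "procure"),
]

# Order the controls by lifecycle phase, placing each as close to its
# potential fault source as possible:
timeline = sorted(pairs, key=lambda p: LIFECYCLE.index(p.phase))
for p in timeline:
    print(f"{p.phase:>8}: {p.role}/{p.action}")
```

The point of the structure is that each control is indexed by *who* can introduce the fault and *when*, which is what lets you place controls near the fault source regardless of the underlying technology of the day.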
In closing, I believe that the NIST cyber security framework has the potential to be an extremely valuable tool, but that its success will depend on its framing and structure. It must speak to non-traditional cyber security audiences in their own voices and simplify otherwise high levels of detail in a way that enables significantly better dialogue than we as a community have been able to achieve so far.
Thank you for your time and efforts.
Over the past week, I’ve had a number of questions from industry, people at various cybersecurity conferences, friends, and…well…my job…asking me about my opinion on the executive order. Here are some interpretations in the form of a FAQ. It’s worth mentioning that, although I am familiar with the culture, language, and *some* small number of the actual background discussions here, I have no ownership nor formal role in most of it. Just some wacky alien putting some other wacky aliens’ behavior in terms more earth-like. If I use definitives like “is” or “will”, please read an implied “my best educated guess” into them.
1. What is the Executive Order and why was it issued?
This is a two-pronged answer. First, obviously it was absolutely a political goad to Congress to write legislation and to poke at the Republicans. However, more importantly, it is also potentially a very valuable order that was seriously thought through and that will be used.
Think of it like a mother (the White House) telling kids (DHS, SSAs) to “clean up the house”. Based on existing house rules (overarching critical infrastructure directives/laws), she expects it will be done and goes off to handle other things. She comes back to find out that the kids have swept once or twice and then gone back to Xbox, pushed stuff under the bed, or made more of a mess of the toy box trying to clean it than there was before.
Mom comes back and says, “OK, I left you to your own devices; here are the specific ways – again within the larger context of house rules – you are going to clean up.” In the case of cyber security, the White House has said: You – DHS and SSAs and everyone else – are going to remove barriers to information sharing, work with our customers (industry) to build some coherent approach to solving the problem to our satisfaction – some standard way of organizing the whole mess – and each of you (especially you SSAs!) is going to create explicit privacy and civil rights protections, or else you fail.
2. What are the main thrusts of the Order?
1) Improve Information Sharing
2) Use business-function driven risk analysis to determine priorities
3) Create a framework of standards for reducing risks from cyber security issues to critical infrastructure
4) Engage industry to the greatest extent possible, and assure privacy and civil liberties are embedded in the entire process.
Whether any of this will be successful or remain uncorrupted is a different question.
3. Could this in any way infringe on individual freedoms if misinterpreted?
4. What will the “Framework” described be?
Based on comments from NIST: The framework will include whatever will achieve effective cyber security: processes, technologies, architectures, concepts, specifications, etc. It is intended to be layered and to include broad principles, common practices, and sector-specific realities.
The role of NIST is to support the industry development of the framework. The government will depend on the actions of the private sector after sharing, up front, performance goals. NIST is being engaged because it has experience gathering lots and lots of input, but this will NOT be a typical NIST thing.
The aim of the framework approach is to enhance adaptability, with cost and impact to economics of business being an integrated explicit part of the conversation.
An additional benefit is that, by increasing the interoperability of requirements, concepts, expectations, etc., baseline security can be driven into the market/products (my comment: this addresses a vendor/industry complaint that has often been voiced).
Moreover, a goal of the EO – both in the context of information sharing and the framework – is harmonization of efforts (this was repeated extensively and resonated with my experience in the dialogue), particularly by the federal government (which, again, has been a substantial private industry complaint).
5. Standards? What is meant by standards? That sounds scary!
Not as scary as you’d think. Based on comments from NIST: Generally, a common basis of comparison…some are performance standards…but some are norms to promote collective, collaborative action. The latter are developed by industry and are what the EO is referring to. In other words, the Framework of Standards is meant less to be comparative and more to allow everyone and everything to work together. (Jack’s note: I’ve said for years there should be a Chinese menu of options selectable by environment and risk; this looks like it might be going down that path.)
6. What are some simple things to know ahead of time that I might not already?
There are laws, mandates, and programs on the books now, and there have been for years. This includes strategic planning, incident response, information sharing, and engagement. The sector-specific agencies’ (SSAs’) job is to take broad cybersecurity capabilities within DHS and apply them in sector (industry) specific ways. All major players in industry have been actively engaged in the dialogue so far. There have been certain cultural, process, political, perception, legal, and conceptual barriers to progress despite existing work and engagement. The Executive Order attempts to rectify these barriers while keeping intact most of the fundamental structures already in place.
7. How does the new PDD relate to the Executive Order?
The PDD is an update/replacement of HSPD-7. These documents are not cyber specific, but they are the policy context under which most critical infrastructure protection activities that the federal government engages in (including cyber security) are driven. The old HSPD-7 and the National Infrastructure Protection Plan from DHS which supports it have been around for years, and understanding them is necessary to understand a lot of the intent behind the executive order.
8. What is an SSA, as defined by HSPD-7, the new PDD, and the NIPP?
SSAs (ref’d above) are the sector-specific agencies (Energy-DOE, Transportation-TSA, Chemical-DHS, etc.) who are the functional owners of engaging their segments of private industry in government cyber security efforts. The EO and the new PDD update their responsibilities from what they were under the old HSPD-7, but they’re similar. For reference, a paraphrased overview of the old SSA responsibilities is to:
- Use mechanisms like Critical Infrastructure Partnership Advisory Council (which allows gov/industry cooperation) to bring Sector Coordinating Councils (made up exclusively of non-lobbyist private industry) together with Government Coordinating Councils (Sector Specific Agency points of contact) to work together on planning the reduction of risk
- Encourage organizations with information to share with those who need it and encourage development of information sharing programs and mechanisms
- Promote education, training, and awareness within industry in coordination with other government and private sector partners
- Identify, prioritize, coordinate federal Critical Infrastructure Protection activities in sector – ie, make sure the government is organized and doesn’t overburden the private sector
- Apprise Congress of industry’s current status and progress in reducing risk, based on engagement and feedback from industry
- Increase integration of cyber security efforts with other all hazards protection and response programs – in other words, since cyber attacks can have physical implications, make sure first responder type organizations are working with cyber ones
- Develop and implement a sector risk management program (within the government) and framework, and use it to determine the risk priorities of the sector and coordinate (not require) risk assessment and management programs with industry. This means: create a process by which, facilitated by government, industry can get together and figure out where it is and what its priorities are
9. How does CISPA relate to this?
An executive order cannot change legislatively assigned federal responsibilities, so everything the EO directs occurs under existing mandates and laws. Further, the EO addresses information sharing AND getting the government’s overall act together in cyber security. CISPA, on the other hand, is aimed (for better or worse, this post isn’t for my opinions on it) at removing legal barriers to information sharing and at specifically addressing problems associated with industry cybersecurity needing to intersect with the intelligence community.
10. What guarantee do we have of transparency in any of this?
Workshops kick off in April. NIST has questions to industry on its website and will be reaching out further (more proactively than “on the website”) in the near future. If you read my earlier NIST post, you’ll see transparency and participation are core, not tangential, tenets here and are one of the things that will (or is intended to at least) distinguish this from past efforts. Further, if you have been on any of the DHS calls with industry, every single conversation revolves around getting more and better industry involvement. They are very serious about it. Finally, in my own work with some of this (which is tangentially related), transparency and engagement have been priorities I’ve seen.
11. Indeed, it’s written with the assumption that Government will continue to be the determining data librarian for cyber threats.
Over and over and over, industry tells gov “we need better threat info”. Most of the EO not dealing with the framework is written to that end – it primarily deals with pushing data TO the private sector because they have requested it. However, post-order messaging has (correctly) been: Look, we don’t have a classified pot of information at the end of the rainbow that’s going to save the day. Industry, you guys know about yourselves way more than we do – or you should. If you don’t share, that’s fine, but we can’t help you unless you help us to do it.
I don’t like the disproportionate focus on Information Sharing. I think it’s a waste of time, but we collectively have created this stupid beast. It might be a red herring, but it’s our collective red herring. This deserves a longer treatment than a couple of sentences, so come see me talk about it at SOURCE Boston.
12. Why is the Cyber EO so obtuse? And while the PPD adds context – it’s clear that we require more (and more) clarity
Much of the obtuseness is because a) some of it is to be defined later by b) federal agencies who will get very clear direction from those in the WH charged with implementing the EO, within the context of c) existing language on the books, and in response to d) specific beefs from industry and dialogue failures in the past. What most people lack is the appropriate context from which to interpret it, since most people are not critical infrastructure owners and operators or feds who have been engaged in the discussion. Much of the insight I’m trying to provide here isn’t direct experience with the EO itself, but the cultural language which has developed in the civilian space on the topic of critical infrastructure protection over the past several years. It’s not understood well outside of Washington, but those it is speaking to understand it. This is a huge problem and one I’ll try to address in Boston.
13. Is this more of the government telling private sector they’re coming?
Gov’t is already there: HSPD-7, NIPP, SSAs, CIPAC, CSCSWG, CNCI, NCCIC, foobar. Regulatory capability is already there: TSA, DOE (NERC CIP), etc. This EO speaks to and sorts out this *existing* stuff in one prong, and tries to sort out information sharing barriers in another prong (barriers which, right or wrong – mostly wrong – industry has cited over and over and over as the reason their cyber sucks).
14. Why do we have any faith that Government has the agility and consistency to get it right this time?
We don’t. But, the way the framework components are laid out, we have an interesting opportunity to force it to work through the order’s focus on creating real consensus business-driven requirements. In particular, I believe cyber security is a quality assurance problem over unbounded time, driven from business priorities, and is almost 100% a human-centric problem. There might be space here for that conceptual shift to occur. More on that later, possibly in Boston.
15. Should the Cyber EO have been so broad? Look at the “Designated Critical Infrastructure Sectors and Sector-Specific Agencies” list in the PPD.
Don’t forget that the PPD is based on years-old definitions and, more importantly, is an all-hazards list primarily focused on physical attacks. At large enough scale, most things are critical in the terms of the broader discussion.
The trick is, for cyber, determining what within those spaces is critical. It’s a different functional discussion – as this is all laid out – than which sectors are critical. That’s handled in a process – a version of which I’ve been facilitating at a sector level for the past year – that is designed to base decisions on business driven threat scenarios. It’s not perfect, but it’s a huge improvement from past methodologies.
16. If and only if (IFF) the Cyber EO was really meant to get action to answer these questions – then it should not have been issued so broadly, so politically charged, and otherwise tied to SOTU the way it was.
Agree. It’s over-politicized – but that gets into questions of its effectiveness and clarity in the current political and cultural environment, and that’s out of scope here.
17. Why not leverage the bodies of work existing up-front?
Because the process of engagement in finding and applying those existing bodies of work is the key element of this part of the EO, not the outcomes themselves. It’s an attempt to build in continuous flexibility and applicability in changing environments and against differing and dynamic priorities. Think “it’s not the destination but the journey” here, and add on “and the requirement to iterate through multiple journeys as a lifestyle”. The mechanisms NIST and the collective gov build to continuously engage industry in the development and adaptation of the framework are where our real opportunities to make this valuable come in – but we need to work together coherently. More on this in Boston.
Also see this document from NIST: http://www.nist.gov/itl/cyberframework.cfm
18. What makes this a compelling DHS issue instead of economic development, science, or other component of Government?
Because the EO can only really address already existing legislatively assigned authorities. This EO is a goad for further legislation, and that might change the agency assigned responsibilities. That said, I actually agree this should be a DHS issue – no other agency has the type of broader mission required to effectively coordinate cybersecurity in the broad terms it requires – NSA would be one of the worst choices, since their core mandates are, in many cases, only of use in terms of focused support. Think correlation with physical and geographically dispersed response and coordination. The FBI, similarly, would be a terrible choice since their mandate is “prosecute and convict”.
19. What about regulation of industry?
There are a number of agencies who *already have* regulatory authority over private sector critical cyber infrastructure – some have used it, some haven’t. The EO asks that they use the new processes in the EO to reevaluate whether they should regulate and how if they don’t now and the effectiveness of any regulation if it’s already in place. Every two years, the government is required to check with industry to make sure any regulation is a) effective and b) not too burdensome. In my opinion (based on work with some of the processes which will be used), this is much less likely to result in additional regulation than is suspected. (This is because the processes attempt to be more empirical and data-informed than the more speculative and subjective attempts in the past.)
20. Why haven’t I heard about any of this and why does it not resonate with me?
So much of this has been driven by lobbyists and industry associations…unfortunate in many cases…but it’s almost impossible to get substantive input from fairer representation. The reasoning behind this is something I’ll cover in Boston, and it’s something we need to culturally change together – and we can.