
UPDATE: Please use the following link for the current agenda. The one in the post is outdated: http://sintixerr.files.wordpress.com/2011/10/cyber-program_1020.pdf

Progress! As you can see below, we’ve confirmed several additional speakers such as Tony Stramella from the NSA and Steve Carmel from Maersk (who was a fantastic speaker last year – he talked about his experiences with maritime piracy and pirates! Did I mention he talked about pirates??).

The Offensive Perspectives panel (Kevin Finisterre, Ruben Santamarta/Reversemode, and hopefully Josh Wright) is going to rock out with some talented vulnerability researchers, and Mark Fabro will do his always-brilliant job of improving the discourse.

We’ll be excited to hear Bryan Sartin discuss the past year’s data breaches and to have front-line experts in the field tell us how the stuff you’ve heard in the news might apply to you (Scot Terban, Liam from Symantec, and the now-short-haired Adam Meyers).

Boeing and Darryl Song from Volpe are going to dish on transportation-specific concerns, and the CTO of the CIA will drive home the need for security to be data-centric. 

Mike Murray will be both entertaining and captivating – even if I don’t know his talk yet – and Russell Thomas will bring a much-needed formal perspective to risk management and cyber security.

Patrick Gray gives a lightning-fast but insightful presentation on social media, Jack Johnson will help us understand financial issues facing organizations today, and Amit Yoran will talk about…whatever. He’s just a smart guy.

Hope you can make it. If you’re interested in attending, the registration link is here: Invitation.

(Please note: if you’re a vendor and plan on selling, we’ll take a pretty dim view of that at this particular conference.)

November 1

| Talk | Speaker 1 | Speaker 2 | Speaker 3 | Moderator |
| --- | --- | --- | --- | --- |
| Introductory Remarks | Dr. Emma Garrison-Alexander, TSA CIO | | | |
| Keynote | Anthony Stramella, NSA | | | |
| Verizon Data Breach Incident Report | Bryan Sartin / Verizon Business | | | |
| Break | | | | |
| Industry Case Study 1: Boeing | Mike Garrett / Boeing | | | |
| Panel: Offensive Perspectives | Kevin Finisterre | Ruben Santamarta | Josh Wright (Tentative) | Mark Fabro |
| Lunch | | | | |
| Social Media | Patrick Gray / Cisco | | | |
| Panel: Maritime | Steve Carmel, Maersk | RDML Robert Day, USCG | RADM James Watson, USCG | TBD (Speaker) |
| Break | | | | |
| Panel: Threats in the News | Scot Terban (Anonymous) | Liam O Murchu / Symantec (Stuxnet) | Adam Meyers (APT) | TBD / Industry |
| Industry Case Study 2: Transportation Control Systems | Darryl Song / Volpe | | | |

November 2

| Talk | Speaker 1 | Speaker 2 | Speaker 3 | Moderator |
| --- | --- | --- | --- | --- |
| Introductory Remarks | TBD | | | |
| Keynote | Vice Admiral Parker / USCG | | | |
| DHS CARMA | TBD | | | |
| Break | | | | |
| Panel: Executive Perspectives | Amit Yoran / Netwitness | Gus Hunt / CTO of CIA | TBD / Industry | TBD / Industry |
| TSA & DHS Joint Sector Collaboration | TSA Cyber Security Awareness & Outreach Branch | | | |
| Lunch | | | | |
| Users & Awareness | Mike Murray / MAD Security | | | |
| Industry Case Study 3: TBD | TBD | | | |
| Break | | | | |
| Panel: Risk Management | Jack Johnson / PWC | Russell Thomas | TBD / Industry | Jack Whitsitt |
| Industry Case Study 4: TBD | TBD | | | |

Say you want to buy a car to take your 5 kids and spouse around town. Now, suppose you start looking for a good, safe van with good gas mileage that fits the whole family and is relatively cheap. $20k? Sure. Ok, now what if you go out to buy this van…but oh no! All you can find are Corvette dealers selling $100,000 cars!!!

Would you buy a Corvette? Hells no. You’d wait until you found something that met your minimum requirement: moving the family around. If you got the Vette, you would have gotten something that, even if it fit “some” of your requirements (moving some people around), doesn’t fit enough of them to actually solve the problem. Furthermore, if you did get the Vette, you probably wouldn’t be able to afford the van, so your problem would go on even longer than if you hadn’t bought the Corvette.

Welcome to the kind of security that says “we should do more of what we’ve been doing, even though we know the architectures don’t work…because something is better than nothing.”  We can’t continue to add on layer after layer of security at ever increasing cost when no number of those layers, as modeled today, will ever get us to a comfortable place.  Getting owned by X% fewer people is still getting owned and doesn’t really change your risk profile unless X is a much bigger number than today’s most common best practices get us.

Nothing is ever perfect, so I’m not suggesting no one should take action until they find a perfect solution. Rather, I’m suggesting we all take a close look at our solution sets, consider how good they’re ever going to get at the end of the day, and make decisions appropriately. When selecting a “50%” solution architecture for $Y, don’t get caught thinking $Y x 2 will get you a 100% solution with the same architecture. :)
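To put a toy number on that, here is a minimal sketch (entirely my own illustration, with made-up catch rates, not data from any real deployment) of why stacking more layers of the same architecture hits diminishing returns: if each extra layer mostly overlaps with the ones before it, spend keeps climbing while residual exposure flattens well above zero.

```python
# Hypothetical back-of-the-envelope model of "more of the same" security layers.
# Assumption (mine, not the author's): layers built on the same architecture are
# highly correlated, so each new layer catches a shrinking share of whatever got
# past the previous ones. All numbers are illustrative.

def residual_exposure(layers: int, first_layer_catch: float = 0.5,
                      overlap_decay: float = 0.5) -> float:
    """Fraction of attacks still succeeding after `layers` correlated controls."""
    remaining = 1.0
    catch = first_layer_catch
    for _ in range(layers):
        remaining *= (1.0 - catch)
        catch *= overlap_decay  # each extra copy of the same idea helps less
    return remaining

if __name__ == "__main__":
    for n in range(1, 7):
        print(f"{n} layer(s): cost ~{n}x, residual exposure {residual_exposure(n):.1%}")
    # Trend: 50.0%, 37.5%, 32.8%, 30.8%, 29.8%, 29.3% ... spend keeps climbing
    # while exposure flattens far above zero; doubling $Y does not buy 100%.
```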

I normally don’t have much to say here about my day job (partly why you’ve seen more of a focus on art), but I thought (since I’d been previously linking to the DHS Control Systems Security Program pages) that it was worth mentioning that ICS-CERT has its own website these days: http://ics-cert.org

Take a look at it if you’re in the control systems / SCADA and security/emergency space (particularly, but not exclusively, with regard to cyber).

Edit/Update: Now that I’m no longer there, I do have a brief take on the subject and a summary of information HERE

So I was sitting in a critical infrastructure cyber security talk earlier this week and had a small revelation.  The talk itself wasn’t all that interesting – it was another attempt to collect and identify consensus best practices for critical infrastructure security from a governance point of view – but it still led me down a path that surprised me.

The authors of the paper being presented had done interviews and other research and derived a number of principles required for critical infrastructure cyber security governance, based on what they heard over and over. At the talk, we had break-out sessions where they were pinging us for our thoughts on their findings. During the session, I realized that I’d heard it all before (obviously, right? It’s a consensus paper) and was wondering why we couldn’t get past the stale “wisdom” repeated ad nauseam without effect…when it hit me: the right use of their paper might be the direct opposite of what the authors think it is, but it’s still useful!

The thought process is as follows:

  1. Assumption: We all “agree” that cybersecurity for critical infrastructure is insufficient and that we’re missing something.
  2. Assumption: The paper represents the community’s opinion, to date, on what needs to happen for good cyber security.
  3. Observation: People are trying to improve security, but despite sporadic improvements, we haven’t made nearly as much progress as we think we should. Something is missing.

Conclusion: Whatever it is we need to do…isn’t in that paper. If we collect a series of best practices and community consensus on a topic where we generally consider ourselves to have failed, that consensus should be used not as a driver of activity, but as a hint at what won’t, by itself, get us where we need to be. The lists should be considered things to exclude as solutions to our unidentified sticking points, not the solutions themselves.

If you’ve read some of my recent posts here, you’ll have seen that I’m back working on creating data visualization pieces as art. In the process of making these, I was reminded again of the relationship between art and security, and of its practical implications for enterprise security efforts, implications that literally dictate success or failure. Bear with me as I walk through the art piece first and then arrive at the security observations :)

First, to work, art has to have a solid concept. You might accidentally create a piece that’s appealing on some level if you just throw paint at canvas, but you probably won’t repeat that success often and observers will understand this.

Taking that into the realm of data visualization: you can make all the pretty graphs you like, but unless you do some leg-work ahead of time and massage the data into shape, they’ll be of little use and will only accidentally be visually appealing in a way that lets you intuitively grok them. (I think this is philosophically similar to some of what Tufte teaches, but I don’t remember for sure.)

For example, if I wanted to (as I did) visually represent the stimulus bill in a meaningful way on screen at once, I could really just use a microscopic font…or turn the whole thing into a jpg and resize it to fit on screen. But what would that accomplish? It would just be mush. We wouldn’t have identified or accounted for the inherent structural properties that we needed to keep to preserve order. We also wouldn’t have separated the wheat from the chaff: useless information would hide useful information. And we wouldn’t have manually added linkages between data points, linkages that would help us draw meaningful conclusions visually and compensate for the loss of resolution in individual words.

What would work instead is to turn (as I did) the Stimulus Bill into columns of useful information. You could convert the free-form English structure of the Bill into a tabular format and add metadata about the text that you wanted to see in the visuals. You could add line numbers and positions in sentences, group words by sections of the document, add word counts, etc. All of this would show up visually and produce a much more useful visualization that would also, because of the new, more conscious conceptual structure, be more appealing to look at.
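As a rough sketch of that kind of pre-processing (my own illustration, not the actual code behind the piece, and with a made-up heading pattern standing in for however the real bill marks its sections), here is one way to flatten free-form text into rows of structural metadata that a visualization tool can actually use:

```python
# Hypothetical sketch of the pre-processing described above: flatten free-form
# text into rows of metadata (section, line number, word position, word length)
# that a plotting tool can work with.
import csv
import re

def text_to_rows(text: str):
    """Yield one dict per word, tagged with simple structural metadata."""
    section = "PREAMBLE"
    for line_no, line in enumerate(text.splitlines(), start=1):
        heading = re.match(r"\s*(TITLE\s+[IVXLC]+|SEC\.\s*\d+)", line)
        if heading:
            section = heading.group(1)          # group words by document section
        for pos, word in enumerate(re.findall(r"[A-Za-z']+", line), start=1):
            yield {"section": section, "line": line_no,
                   "position": pos, "word": word, "length": len(word)}

if __name__ == "__main__":
    sample = ("TITLE I GENERAL PROVISIONS\n"
              "SEC. 3 Purposes of this Act.\n"
              "Funds shall be distributed promptly.")
    with open("words.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["section", "line", "position", "word", "length"])
        writer.writeheader()
        writer.writerows(text_to_rows(sample))
```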

So what does this have to do with security? Everything.

Recently, much has been made of the new SANS CAG control list. Basically, this is a list of “best practice” security measures and controls that, if properly implemented, will make the most impact in securing organizations. Where’s the problem? The problem is that none of these are new (except WiFi). They’ve all been around longer than I’ve worked in the field (7ish years) and probably much longer than that. Everyone who works in security knows them. Most CTOs, CIOs, and CISOs are probably familiar with them. And yet, they’re either not implemented or, more often, they just don’t work.


If these really are best practices (and they are), yet they’re not working, where’s the disconnect? I think it’s lack of structure. Most organizations do not operate their businesses in a manner that can be secured. There are inherent structural flaws in the enterprises themselves (as in, there isn’t any structure at all) that conflict with and outright prevent security from happening, just like in art and visualizations. No matter how much effort or money you throw at the problem, cyber/IT/technical security controls will get you nowhere quickly (if anywhere ever) without a properly run and organized business. Failed cyber or IT security is, ultimately, a symptom of failed Operational Risk Management.

If you can’t track assets, if you haven’t identified your key data, if you don’t have clear and measurable business objectives for IT and cyber systems, if you don’t have a clear line of sight from the risk of technical failure to business impact, your security controls -will- fail.

Why? Because an organization run without these things will consistently make poor decisions based on incorrect, out-of-date, or conflicting information. In other words, you have to build break points into the business so you can check, measure, and change the organization at key junctures in order to make good risk-based decisions. “Risk-based decision making” gets bandied about like “moving forward” and “synergies”, but it’s not an empty phrase; it has real, concrete impacts and prerequisites.

Let’s look at a best-case scenario where everyone wants to do the right thing, but there isn’t an enterprise or business architecture in place. Everyone goes through an evaluation of need and risk, picks the right controls, and puts them in place. Hunky dory, yeah? Well, what happens when a new line of business is added? Nothing to do with security, right? What if the new line is taking critical data that wasn’t exposed by the other systems and inadvertently making it public? Would you know that? If you needed to patch critical systems quickly to prevent a flaw, would you know which ones kept your business running? Would you have documented, in an easily accessible manner, the fact that your manufacturing systems depended on a feature that the new patch (which works just fine on desktops) disables? Etc. Not to mention that your IDSs depend on this info, as do your firewalls, your SEMs, everything. Relatively little of what happens on your network is inherently bad outside of a business context. There are many more (and probably better) examples…but there are two take-home points:

  1. Everyone with the authority to make changes to your business needs to be aware of the secondary dependencies of those decisions and how they intersect with security, and needs to inform others of the changes they make.
  2. If you try to do this without managed processes and without maintaining and continuously updating the information about the business in an architecture, you’ll fail. It’s too hard, too expensive, and takes too long to keep doing it from scratch. It’ll never be accurate, timely, relevant, etc. (A toy example of that kind of maintained information follows below.)
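As a toy example of the business architecture information mentioned in point 2 (entirely my own sketch, with made-up asset names and dependencies), here is what a minimal, continuously maintained registry linking systems to business functions might look like, so a "patch now?" decision can be checked against business impact before it’s made:

```python
# Hypothetical sketch: a tiny asset registry linking technical systems to the
# business functions and data they support, so decisions like "patch now?" can
# be checked against business impact instead of being made blind.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    business_functions: list[str]           # what the business loses if this breaks
    key_data: list[str] = field(default_factory=list)
    depends_on_features: list[str] = field(default_factory=list)

REGISTRY = [
    Asset("hr-desktop-pool", ["payroll entry"], key_data=["employee PII"]),
    Asset("plant-historian", ["manufacturing line 3"], key_data=["process telemetry"],
          depends_on_features=["legacy SMBv1 file shares"]),  # made-up dependency
]

def patch_impact(feature_disabled_by_patch: str):
    """List business functions put at risk if a patch disables the given feature."""
    return [(a.name, a.business_functions) for a in REGISTRY
            if feature_disabled_by_patch in a.depends_on_features]

if __name__ == "__main__":
    # The patch is fine for desktops, but the registry flags the manufacturing hit.
    print(patch_impact("legacy SMBv1 file shares"))
    # [('plant-historian', ['manufacturing line 3'])]
```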

Business leaders at all levels, in many (most?) organizations, are simply making bad decisions that affect security. It’s not that we, as security professionals, don’t know the right things to do. It’s that we can’t express them in terms of business risk, and business leaders typically don’t have the structure built in to effect positive change throughout the organization. Build some good, clean structure with visible break points at critical junctures in your business flow, and security will start to become cheaper, easier, and more effective.

(Second Update: As of 9/14/2009, I’m working for Idaho National Laboratory (INL), liaising to DHS in DC in support of their ICS-CERT effort. This is reflected in the online resume, but not yet in the pdf.)

Just a pinging post, since I’ve (finally) updated my resume on this site and elsewhere to reflect what I’m currently doing at TSA. Apparently, IDS analysts in this area are in hot demand, but that’s not really what I do any more. Unfortunately, what I -do- do isn’t as easy to tokenize/categorize as something like that. I do love it, though :) I like…making stuff work better than it did before and doing new things. People, in particular.

Here’s a link to the PDF:

http://jackwhitsitt.com/whitsittresume02092009b.pdf

And online:

http://sintixerr.wordpress.com/jack-whitsitts-technical-and-security-resume/
