
Hmmm. I’d like to clarify my comments a bit here and respond to yours. First, SIEMs – if used correctly – have every capability required to make that guarantee. What’s lacking, rather, is the correlation architect’s (and even the vendor’s) clear understanding of what it is they’re building.

When you create SELECT rules without some sort of (at least) simple ontology, you can’t predict that you’re doing a complete SELECT. This is a lot like IDS signatures looking for a list of known bad stuff – it’ll never be remotely complete.
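To make that concrete, here’s a minimal sketch in Python (event fields invented for illustration) of why enumerated SELECT rules leak: anything the rules don’t explicitly name falls through, unaccounted for.

```python
# Minimal sketch: enumerated SELECT rules, like IDS signatures, only match
# what you thought to list. Everything else silently falls out.
events = [
    {"id": 1, "type": "port_scan"},
    {"id": 2, "type": "brute_force"},
    {"id": 3, "type": "weird_new_thing"},  # nothing selects this
]

KNOWN_BAD = {"port_scan", "brute_force"}  # the "signature list"

selected = [e for e in events if e["type"] in KNOWN_BAD]
fell_out = [e for e in events if e["type"] not in KNOWN_BAD]

print(len(selected), "selected;", len(fell_out), "unaccounted for")
# -> 2 selected; 1 unaccounted for, and there is no formal record of it
```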

However, if you define your inclusive as well as your exclusive processing, you can in fact have something smart done to all events. The way I’ve built it in the past has been to create multiple correlation paths: Known Behavior, Statistical Core Processors, and Behavior-Based Auto-Classing.

Known Behavior rules are self-evident.

In Statistical Core Processors, you create rule-series which each perform one single statistical transformation per series (kurtosis, increase over average, etc.). You must remember, though, to derive your measurements and environment boundaries from known variables, so you’ll need some sort of automated system to re-baseline those numbers as part of the preprocessing and feed them back into the system.
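As a sketch of one such rule-series (names and thresholds invented, not from any vendor), here’s a single “increase over average” transformation whose baseline is re-derived automatically from a sliding window instead of being hardcoded:

```python
from collections import deque

class IncreaseOverAverage:
    """One statistical core processor: a single transformation per series.

    The baseline (the mean of a sliding window) is re-derived automatically
    as events flow through, instead of being a hardcoded magic number.
    """
    def __init__(self, window=100, factor=3.0):
        self.window = deque(maxlen=window)
        self.factor = factor

    def process(self, value):
        baseline = sum(self.window) / len(self.window) if self.window else value
        flagged = len(self.window) > 10 and value > self.factor * baseline
        self.window.append(value)  # re-baselining as part of preprocessing
        return flagged, baseline

proc = IncreaseOverAverage(window=60, factor=3.0)
for count in [10, 12, 11, 9, 13] * 5 + [55]:
    hit, base = proc.process(count)
    if hit:
        print(f"value {count} exceeds {proc.factor}x baseline of {base:.1f}")
```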

The statistical stream should be split in two by source: events not processed by any other correlation stream, and events processed by at least one of the other two streams. The split can take the form of tuple tagging (/correlated/knownbadtype/statisticalchange, for example).
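A sketch of that split, with hypothetical tag paths (the tuple-path syntax just mirrors the example above):

```python
def tag_for_statistics(event):
    """Tag each event with a tuple path recording which correlation streams
    have already touched it, then route it into the statistical stream."""
    if event.get("matched_streams"):  # e.g. {"knownbadtype"}
        path = "/correlated/" + "/".join(sorted(event["matched_streams"]))
    else:
        path = "/uncorrelated"
    event["stat_path"] = path + "/statisticalchange"
    return event

e1 = tag_for_statistics({"id": 1, "matched_streams": {"knownbadtype"}})
e2 = tag_for_statistics({"id": 2, "matched_streams": set()})
print(e1["stat_path"])  # /correlated/knownbadtype/statisticalchange
print(e2["stat_path"])  # /uncorrelated/statisticalchange
```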

When we say Statistical Core Processors here, we mean that only a basic, simple transformation is made, so that later in the stream, once all events have been accounted for, you can combine the outputs of these simple transformations into more complex ones.

The third stream, Behavior-Based Auto-Classing, deals with the failure of vendors to properly label information (like what’s spam, in this example). Classing (labeling) information is best done based on its behavior, rather than by some hacked-on system, using whatever known properties are available (not all scenarios have known properties; in those cases, the events will at least fall into the statistical processors).

Example: email that ends up in an inbox that has never been used is, by definition, spam. IDS events triggered by this mail can be classed as “SPAM” events whether or not the vendor labeled them as such: they’re either directly related to spam, or else generic enough not to be useful in differentiating between one actual activity and another. So an automatically generated list of events associated with SPAM is sent to ArcSight or another SIEM, and the SIEM then classes/groups/tags those events as SPAM – and that tagging can become criteria for other rules, be reprioritized, displayed, or filtered out.
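A toy version of that feedback loop – the inbox and IDS structures are invented for the sketch; in a real deployment the generated list would be pushed into the SIEM for use as rule criteria:

```python
# Toy auto-classing loop. Mail arriving in an inbox that has never been used
# is, by definition, spam; any IDS events triggered by that mail get the SPAM
# class regardless of what the vendor called them. All structures invented.
never_used_inboxes = {"decoy01@example.com", "decoy02@example.com"}

mail_events = [
    {"msg_id": "m1", "rcpt": "decoy01@example.com"},
    {"msg_id": "m2", "rcpt": "alice@example.com"},
]
ids_events = [
    {"sig": "SMTP_Suspicious_Header", "trigger_msg": "m1"},
    {"sig": "SMTP_Suspicious_Header", "trigger_msg": "m2"},
]

spam_msgs = {m["msg_id"] for m in mail_events if m["rcpt"] in never_used_inboxes}

# The automatically generated class list you would feed back into the SIEM:
for e in ids_events:
    if e["trigger_msg"] in spam_msgs:
        e["class"] = "SPAM"  # a tag other rules can filter or prioritize on

print([e for e in ids_events if e.get("class") == "SPAM"])
```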

In conjunction, these three correlation streams ensure that no events fall out, that you have definable, repeatable criteria for every event, and that you have known properties and facts about every single event that goes through your SIEM.
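Putting the three streams together, the invariant is easy to state in code (all names invented; the tagging function is the earlier sketch, condensed): every event takes at least one path, so nothing is ever formally unaccounted for.

```python
KNOWN_BEHAVIOR_RULES = {"port_scan", "brute_force"}

def auto_class(event):
    # Stand-in for the behavior-based auto-classing stream sketched above.
    return "class" in event

def tag_for_statistics(event):
    # Same tuple tagging as the earlier sketch, condensed.
    streams = event.get("matched_streams") or set()
    prefix = "/correlated/" + "/".join(sorted(streams)) if streams else "/uncorrelated"
    event["stat_path"] = prefix + "/statisticalchange"
    return event

def correlate(event):
    """Route every event through all three streams; none can fall out."""
    matched = set()
    if event.get("type") in KNOWN_BEHAVIOR_RULES:
        matched.add("knownbehavior")
    if auto_class(event):
        matched.add("autoclass")
    event["matched_streams"] = matched
    # The statistical stream unconditionally takes everything, tagged by
    # whether another stream already touched it.
    return tag_for_statistics(event)

print(correlate({"id": 3, "type": "weird_new_thing"})["stat_path"])
# -> /uncorrelated/statisticalchange : even the unknown event is accounted for
```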

Interestingly, this also means you have created a system that will automatically place objects into predefined ontological classes. Huh. That’s really cool. Ontologies allow us to automatically create knowledge out of data.

And I guess that’s my comment on your comment: SIEMs can be used for more than selective real-time analytics. I’ve used complex time/day-of-week algorithms to bucketize my events and create more accurate pictures of the state of the network over time in the SIEM – which is useful for in-depth analysis. I used a number of tools (visualizations included) to look for times of day and days of the week where traffic patterns were typically the same, and most statistical measurements were automatically derived by comparing only similar temporal buckets to each other. That helped get rid of the start-of-business-day and it’s-the-weekend effects on the values.
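The bucketing itself is simple; the payoff is in only ever comparing a measurement against its own bucket’s history. A minimal sketch (the hour-of-day by weekday/weekend granularity here is just one choice):

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

history = defaultdict(list)  # bucket key -> past measurements

def bucket_key(ts: datetime):
    # Compare weekdays only to weekdays, weekends only to weekends, and
    # each hour of day only to the same hour of day.
    daytype = "weekend" if ts.weekday() >= 5 else "weekday"
    return (daytype, ts.hour)

def is_anomalous(ts: datetime, value: float, factor: float = 2.0) -> bool:
    key = bucket_key(ts)
    past = history[key]
    anomalous = len(past) >= 5 and value > factor * mean(past)
    past.append(value)  # the bucket re-baselines itself over time
    return anomalous

print(is_anomalous(datetime(2007, 9, 17, 9), 1200.0))  # a Monday at 09:00
```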

In closing, this need for better classification is universal; it is a problem facing us in the modern world in any system designed to help humans make decisions, or requiring them to.


Anti War Protest: Hot Communists

Originally uploaded by sintixerr

The world is sort of funny.

You can be in the middle of a crazy, pulsating mass of humanity moving all around you and still be isolated from the event itself. You might speak to people next to you. You could shout out in excitement. You can bump and be bumped trying to find a better spot.

Still, most of the time, you haven’t affected things one way or another.

But, show up with an SLR and act like you belong there, and things change. Strangers look you in the eye and children smile. The world stops for the shutter and then keeps moving. It surprises me that the very act of documenting an event makes you a part of it. I’ve always heard about journalists and photographers not wanting (at least openly) to be the news or the event and, yet, when they’re there, they’re as natural as the weather.

Why is this? Is it because constant documentation and observation have become part of who we are as a culture? It seems maybe so.

Once, I heard that 1 in 10 Americans has been on TV. I wonder what the stats are for those who have been featured online in pictures taken by a stranger.

Weird. I’ve rarely felt more involved in the world around me than when I’m an anonymous person behind a serious-looking camera, and it’s one of my favorite things, so far, about my exploration of the craft.

(Regarding the title of the picture that inspired the post: I doubt they were “communists”, but the bandanna thing is such a clichéd -insert alternate political system of your choice here- look at protests that it’s a bit silly.)


Anti War Protest: The Letter A

Originally uploaded by sintixerr
Sweet. While I might be a bit behind the curve here, I just discovered that I can blog straight from Flickr to WordPress. This is outstanding and will definitely increase the number of posts coming from me. I’ve recently updated and rearranged my Flickr account for the first time in MONTHS and have been spending a lot of time there.

My wife and I have begun to get a lot more serious about photography and have been spending a lot of time out and about taking pictures. On Sunday, Angela Kleis joined us (although she forgot her batteries) and it was so much fun that we’ll be doing it again next Sunday – with cool, fun gear.

The main star of the experimentation will be a Rolleiflex camera from 1951-1954ish. It’s old, manual, and cool-looking.

She also picked up a Holga for us so we could be like all the artdc cool kids (Angela will be bringing hers Sunday as well). That means there will be three of us running around shooting everything we see. I’m seriously looking forward to it!

As for this picture, it was taken (as all of mine are lately) with a Digital Rebel XT. It was set to shoot B&W (i.e., the B&W isn’t a later Photoshop conversion) with a virtual “red filter”.

The scene of the picture was the recent DC anti-war protest (we were just there to document, not participate). It really reminds me of something from Mirrormask or Neverwhere….

Just comments on a previous coworker’s paper that he’s writing on tuning ArcSight. It’s a bit spewy and unedited (and will go to the other blog as a less stream-of-consciousness bit when I start it shortly), but I thought I’d pass the time until I write another art entry (photography is fun!) with it anyway:

What seems to be missing is commentary on the how and why of acting on the information that goes through the ESM – beyond just how the tools to perform those actions work.

By way of example, look at these specific quotes:
1. Normalization also includes translating the severity scales used by the different devices into ArcSight’s “Agent Severity” scale.

2. ArcSight connectors also assign each event to a set of categories (that is, it assigns a category tuple) using six fields derived from the fields included in the events collected by the connectors. These categories are designed to group like events from unlike devices – from two different IDSs, say, from ISS and Cisco.
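For context, the normalization those quotes describe amounts to something like this sketch – the field names and mappings here are invented for illustration, not ArcSight’s actual schema:

```python
# Sketch of the normalization step: device-native severities are mapped onto
# one common scale, and each event gets a category tuple so that "like events
# from unlike devices" land in the same class. Field names are invented here.
SEVERITY_MAP = {
    ("iss", "high"): 8,
    ("cisco", "sev2"): 8,
    ("cisco", "sev5"): 3,
}

def normalize(vendor, raw_severity, raw_name):
    return {
        "agent_severity": SEVERITY_MAP.get((vendor, raw_severity), 5),
        # The category tuple; a real mapping would derive this from the
        # event's own fields rather than hardcoding it.
        "category": ("attack", "network", "recon"),
        "vendor": vendor,
        "raw_name": raw_name,
    }

a = normalize("iss", "high", "Port_Scan")
b = normalize("cisco", "sev2", "ICMP Sweep")
# Two different IDSs, one comparable representation:
print(a["category"] == b["category"], a["agent_severity"], b["agent_severity"])
```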

Why does ArcSight do this? What does it mean for my correlation rules? Can I, algorithmically and ahead of time, guarantee that the system will “think” about every event I want it to? With almost every single correlation methodology I’ve seen – especially including ArcSight’s default methodology – the answer is a resounding “NO”. This means that you (formally) have no idea where your bits are at any point, whether they’ve been aggregated, why or why not, what transformations or decisions ArcSight has made about them, etc.

This methodology failure means that, except by sheer luck (and that’s not formal), you cannot go back and do formal analysis on an incident that has passed through ArcSight without the original raw events and significant manual labor.

Read that statement above again – it’s important!

Basically, tuning the correlation engine (ArcSight) should never be approached from an “I need to get rid of stuff” – pure data reduction – standpoint. You will probably achieve reduction in the end, but that’s an effect of the effort, not its actual goal. What you are doing, rather, is defining your environment (in a very literal sense).

These definitions (filters in ArcSight) then allow you to programmatically create an ontology within your system which defines your information classes, what their properties are, and how they relate to each other. That ontology exists as a combination of your basic filters and your core rules.

Once you know what your classes are, you can then write rules to define what kind of transformation (comparison, aggregation, filter, pass to another rule, send to active channel) ArcSight performs on your events.

Once these basic rules are written, you can then write higher-level rules to express your intentions logically: “Show me when any perimeter firewall exceeds its normal state by a factor that’s unusual across the enterprise firewalls”.

In that statement, you have to have defined “Firewalls”, what a Perimeter Firewall is, what your enterprise is, what kinds of traffic values and ranges firewalls can expect, what your average enterprise data rate is for firewalls, and a host of other things. Unless you have formally created these things in ArcSight’s rule/filter system and can reuse them, you can’t hope to create a scalable correlation engine – you’ll lose track of what the system is doing, you’ll have to spend time and effort manually retracing how ArcSight got from point A to point B, and you’ll lose the precision and accuracy of machine correlation in favor of manual correlation under pressure.
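As a sketch of how that decomposes (all names and numbers invented), the sentence becomes a handful of reusable definitions plus one short rule on top of them – which is the point: the rule is only as scalable as the definitions beneath it.

```python
from statistics import median

# Reusable definitions – the "filters" the higher-level rule depends on.
FIREWALLS = {"fw-dmz-1", "fw-dmz-2", "fw-core-1"}
PERIMETER_FIREWALLS = {"fw-dmz-1", "fw-dmz-2"}  # a defined subset of FIREWALLS
baseline_rate = {"fw-dmz-1": 900.0, "fw-dmz-2": 950.0, "fw-core-1": 2000.0}

def unusual_across_enterprise(factor, all_factors, k=3.0):
    """'Unusual' defined here as more than k times the fleet-wide median."""
    return factor > k * median(all_factors)

def rule_perimeter_anomaly(current_rate):
    # "Exceeds its normal state": current rate relative to its own baseline.
    factors = {fw: current_rate[fw] / baseline_rate[fw] for fw in FIREWALLS}
    return [fw for fw in PERIMETER_FIREWALLS
            if unusual_across_enterprise(factors[fw], list(factors.values()))]

print(rule_perimeter_anomaly({"fw-dmz-1": 5400.0, "fw-dmz-2": 980.0,
                              "fw-core-1": 2100.0}))  # -> ['fw-dmz-1']
```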

Once all of that is in place, you can create rule classes: groups of rules that organize and group events, rules that compare them to each other to say something smart about them, and then rules that either present the new events to analysts, send them back for additional correlation, or drop them completely.
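Sketched as the final disposal decision (illustrative thresholds and field names, not product features):

```python
def dispose(correlated_event):
    """Final rule class: present to an analyst, recycle, or drop."""
    if correlated_event.get("priority", 0) >= 7:
        return "send_to_analyst_channel"
    if correlated_event.get("needs_more_context"):
        return "reinject_for_correlation"
    return "drop"  # dropped, but only after being classified and recorded

print(dispose({"priority": 8}))                              # analyst channel
print(dispose({"priority": 2, "needs_more_context": True}))  # reinject
```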

I hope I’m making some sense here :)

I would highly suggest checking out this URL: http://en.wikipedia.org/wiki/Ontology_(computer_science)

and:

http://en.wikipedia.org/wiki/Enterprise_service_bus

Ontologies are excruciatingly important to understand if you’re doing ESM correlation (not that they’re commonly understood, but trust me on this).

Enterprise Service Buses (in Service-Oriented Architectures) have a lot of the same requirements and features as ArcSight/ESMs, and they’re a good model for understanding ArcSight’s role in the context of security devices.

So I feel a bit bad now about completely wiping out my 13,000+ square meters of Second Life territory, but it had to be done. I’m in a groove with work, and I’m running a 10k (6 miles) in the Pentagon City area with Angela in 2 months. Those two things alone need all of my concentration. Maybe early next year I’ll get back to virtual worlds – I was really getting to an interesting place by using Second Life as a starting point for 2D art finished in real life – but we’ll see how it goes.

If you’ve been watching my Flickr stream, you’ll have noticed that I haven’t abandoned art altogether. Rather, the focus for the past several weeks has been on taking local photographs and using them as the base layer for further work. I’ve been very much interested in manipulating point of view post-shot. This has manifested itself in a series of photographs where I’ve layered multiple level settings and saturations on top of each other to bring different components of the image into highlight (the raw shapes, primary tones and colors, etc.), all while keeping the shadow of the original image beneath to hint at what the eyes would see without the manipulations. I don’t kid myself that I’m experienced here, but I’ve enjoyed looking at my own output.

I also opened the new blog (infoage.wordpress.com), but there is nothing there yet. I had much better names in mind for the blog, but the WordPress site had some short-term memory issues at the time and my names got lost in the ether beyond. Oh well.

There is some camera coolness going on here as well – I’ll be moving into the world of DSLRs soon. My wife just ordered herself a new Canon 40D!!!! (it should arrive tomorrow), so I’ll be inheriting the Digital Rebel XT. This means we get to share the L series lenses I got her for Christmas last year :)

Lastly, as if anyone cares, I really made some progress with my architecture modeling efforts at work today. It’s all about Michelangelo’s quote about finding the structure in the stone and not forcing something that’s not there.

For various reasons, I’ve decided to close the SintixErr Gallery in Second Life. Not least of these was the fact that it was costing me $200/month to maintain (including the streaming media accounts). Work has taken on some new angles, and that aspect of my life is finally waking from a long slumber and requires my undivided attention. Correlation, Ontologies, Semantics, Enterprise Architectures, National Security, Terrorist Watchlists, SOA, Enterprise Service Buses, oh my.

I gave a talk at the DHS Security conference in Baltimore recently about Policy-Driven Enterprise Security Architecture and was blown away by how few people understand how much the theory driving EA is going to impact the whole connected human race over the next few decades. They typically regard it as a morass of empty paperwork instead of an attempt to solve the fundamental problems we face as we move from the Data Age to the actual Information Age.

So, I’ll be starting a new blog about these topics in the next day or so. Check back for more info on that later.

In the meantime, check out this fantastically on-point book, “Emergent Information Technologies and Enabling Policies for Counter-Terrorism” from the IEEE Press Series on Computational Intelligence. It’s amazing how similar the Security Event Correlation, Enterprise Architecture, and Counter-Terrorism information theory problem domains are.

Also, if you want some dated and dry but still relevant reading straight from the US government, try this (I found it in the above book): http://www.gao.gov/new.items/d03322.pdf

It essentially links EA directly to National Security matters.
