You are currently browsing the category archive for the ‘Process’ category.
If you’ve read some of my recent posts here, you’ll have seen that I’m back working on creating data visualization pieces as art. In the process of making these, I was reminded again of the relationship between art and security, and of its practical implications for enterprise security efforts – implications that literally dictate success or failure. Bear with me as I walk through the art piece first and then arrive at the security observations :)
First, to work, art has to have a solid concept. You might accidentally create a piece that’s appealing on some level if you just throw paint at canvas, but you probably won’t repeat that success often, and observers will pick up on it.
Taking that into the realm of data visualization, you can make all the pretty graphs you like, but unless you do some leg-work ahead of time and massage the data into shape, they’ll be of little use and only accidentally visually appealing in a way that lets you intuitively grok them. (I think this is philosophically similar to some of what Tufte teaches, but I don’t remember for sure.)
For example, if I wanted to (as I did) visually represent the stimulus bill in a meaningful way on screen at once, I could really just use a microscopic font…or turn the whole thing into a jpg and resize it to fit on screen. But what would that accomplish? It would just be mush. We wouldn’t have identified or accounted for inherent structural properties that we needed to keep to preserve order. We also wouldn’t have separated the wheat from the chaff – useless information would hide useful information. And we wouldn’t have manually added linkages between data points that would help us draw meaningful conclusions visually to account for a loss of resolution in individual words.
What would work, instead, is to turn (as I did) the Stimulus Bill into columns of useful information. You could convert the free-form English structure of the Bill into a tabular format and add metadata about the text that you wanted to see in the visuals. You could add line numbers, positions in sentences, group words by sections of the document, add word counts, etc. All of this would show up visually and present a much more useful visualization that would also, because of the new, more conscious conceptual structure, be more appealing to look at.
So what does this have to do with security? Everything.
Recently, much has been made of the new SANS CAG control list. Basically, this is a list of “best practice” security measures and controls that, if properly implemented, will make the most impact in securing organizations. Where’s the problem? The problem is that none of these are new (except WiFi). They’ve all been around longer than I’ve worked in the field (7ish years) and probably much longer than that. Everyone who works in security knows them. Most CTOs, CIOs, and CISOs are familiar with them. And yet, they’re either not implemented or, more often, they just don’t work.
If these really are best practices (and they are), yet they’re not working, where’s the disconnect? I think it’s lack of structure. Most organizations do not operate their businesses in a manner that can be secured. There are inherent structural flaws (as in, there isn’t any structure) in the enterprises themselves that conflict with and outright prevent security from happening – just like in art and visualizations. No matter how much effort or money you throw at the problem, cyber/IT/technical security controls will get you nowhere quickly (if anywhere ever) without a properly run and organized business. Failed cyber or IT security is, ultimately, a symptom of failed Operational Risk Management.
If you can’t track assets, if you haven’t identified your key data, if you don’t have clear and measurable business objectives for IT and cyber systems, if you don’t have a clear line of sight from the risk of technical failure to business impact, your security controls -will- fail.
Why? Because an organization run without these things will consistently make poor decisions based on incorrect, out-of-date, or conflicting information. In other words, you have to build break points into the business to be able to check, measure, and change the organization at key junctures in order to make good risk-based decisions. “Risk-based decision making” gets bandied about like “moving forward” and “synergies” – but it’s not an empty phrase, and it has real, concrete impacts and prerequisites.
Let’s look at a best-case scenario where everyone wants to do the right thing, but there isn’t an enterprise or business architecture in place. Everyone goes through an evaluation of need and risk, picks the right controls, and puts them in place. Hunky dory, yeah? Well, what happens when a new line of business is added? Nothing to do with security, right? What if the new line is taking critical data that wasn’t exposed by the other systems and inadvertently making it public? Would you know that? If you needed to patch critical systems quickly to prevent a flaw, would you know which ones kept your business running? Would you have documented, in an easily accessible manner, the fact that your manufacturing systems depended on a feature that the new patch – which works just fine on desktops – disables? Etc. Not to mention that your IDSs depend on this info, as do your firewalls, your SEMs, everything. There is relatively little happening on your network that is inherently bad outside of a business context. There are many more (and probably better) examples…but there are two take-home points:
- Everyone with the authority to make changes to your business needs to be aware of the secondary dependencies of those decisions and how they intersect with security, and needs to inform others of the changes they make
- If you try to do this without managed processes and without maintaining and continuously updating the information about the business in an architecture, you’ll fail. It’s too hard, too expensive, and takes too long to keep doing it from scratch. It’ll never be accurate, timely, relevant, etc.
Business leadership at all levels and in many (most?) organizations is simply making bad decisions that affect security. It’s not that we, as security professionals, don’t know the right things to do. It’s that we can’t express them in terms of business risk, and business leaders typically don’t seem to have the structure built in to effect positive change throughout the organization. Build some good, clean structure with visible break points at critical junctures in your business flow, and security will start to become cheaper, easier, and more effective.
On the subject of these “data visualizations as art”, I’ve been trying to better articulate why I think they’re art and how I’m trying to evolve my process.
What it comes down to is that there seem to be two pieces to developing the visualizations:
- Choosing the right structure and the right things to measure about the text or data…what makes sense to compare to what. How do you reduce the noise and the non-dependent variables? Each type of text you’re measuring and each circumstance has different relationships. There is a lot of science to this part, but it’s not completely predictable. There is art.
- How do you visually best enhance and needle out the important details, the contrast between points, etc. so that they can be “seen” in the noise that doesn’t matter? This is all art. Understanding how color, shape, contrast, etc. work together, and how to use all of those to present a dense amount of information without being overwhelming, is tricky and depends on the skill of the one creating it…
It’s my belief that playing to what we understand of people’s ability to process and comprehend aesthetics in art involves exactly the same techniques, and takes advantage of the same aspects of people’s brains and senses, as good visual data analysis. So, if you’re doing data analysis, you start out figuring out #1, and then move to #2 based on #1.
What I was trying to do with these stimulus images – and the last of my security visualizations – was to start out with concepts of what I’d like for #2 (how they would “feel”) and then figure out what I needed to do in #1 (massage the data) to get there…while still remaining true to the underlying information.
Next up (and once I learn more Objective-C), I’m going to try to read the stimulus bill into Quartz Composer and combine my recent interactive/music visualizations with the Bill. We’ll see if that goes anywhere interesting. :)
Also, Artomatic returns to DC this year. I very well may be displaying this stuff there when it comes around. This or the music/webcam visualizations.
Today, after the 8-hour “Industrial Control Systems Security for IT Professionals” class, I wanted to make something pretty. And code. And work on a protocol problem. I’ve needed to look a little at the new Stimulus bill for work lately, so I thought I’d try to at least say I’d written Python today, dissect the text of the bill into parsable chunks, then throw it into some visualizations. I can’t easily capture the interesting avenues of analysis I was pursuing visually (and I don’t feel like writing it up), but I did manage to make some kind of pretty pictures. Hopefully someone feels inspired by them and goes down a similar path. (I already have some ideas for further stats I want to parse from the bill to be able to look at it more meaningfully. Perhaps I’ll do it this weekend – this was just the first cut at setting it up.)
First, I grabbed the full text of the bill from HERE. Then, I wrote some (stupidly) simple Python (again, I’m never sure if it’s -good- Python) to parse the bill and turn it into a new file with five columns: Word Number, Word Length, Line Number, Word Position in Line, and the actual Word itself. This essentially turned the bill into a text file with every word in the bill on its own line (in the order it showed up), but with machine-readable metadata I could use to visually represent it.
stimulus = open('/Users/sintixerr/Documents/stimulus.txt', 'r')
finalfile = open('/Users/sintixerr/Documents/sdump.txt', 'w')
wordnum = 0
for linenum, line in enumerate(stimulus, 1):
    for pos, w in enumerate(line.split(), 1):
        wordnum += 1
        finalfile.write('%d\t%d\t%d\t%d\t%s\n' % (wordnum, len(w), linenum, pos, w))
finalfile.close()
Then, I opened up the new tab delimited bill in my visualizer of choice and ran it through a few different ways of representing the bill.
First, the raw text – without any real manipulation – looked cool in and of itself, and I noticed some interesting, if obvious in hindsight, features. (I did clean out some obviously bad data first with a little sed action, but that mostly just involved removing punctuation that caused the same words to show up as different ones.)
First, if you look about a fourth of the way from the left, and then again closer to halfway, you see a vertical “break” in the scatterplot where the density looks much lower. That is probably a major section break in the original document (I honestly haven’t actually read it in English yet). That possibility is supported by the second observation, which is: even in human-written documents, you can still discern protocol visually. (Again, obvious, but it’s neat.) If you look at the bottom third of the image, it looks nothing like the top 2/3: much more curving paths, fewer horizontal lines, less density, etc. If you look at those “words”, they’re all document-structure words (like section numbers, headings, etc.)…and monetary figures. If you look closely, there appear at first glance to be two or more incompatible or unrelated document content structures there. Above that section is where the more obvious free-form English exists in the set.
Moving on from there, I wanted to see if I could get anything intellectually or aesthetically interesting by using a scatterplot to draw out the shape of the bill. To do that, I plotted “Line Number” on the X axis and “Position of Word in the Line” on the Y axis. (Actually, originally those two were swapped, but the resulting image “looked better” when I swapped the X and Y.) I colored everything by Word on a categorical scale so things wouldn’t blend together too much and then ratcheted up the size scale to reduce empty space. I was looking for a visual representation of the literal structure of the document, not an analysis tool, or I wouldn’t have done that last bit.
The resulting image looks like this:
Finally, I was curious if I could do a little manual clustering work. I tried to narrow down the words in the data set to those that might have some intrinsic meaning in the context of the stimulus bill. This means I got rid of prepositions, repeated filler words, etc. I did this by knocking out every word under 4 letters and all of those over 17 characters (the ones over 17 were all artifacts of turning the bill into something parsable, not actual real words). Then I created a bar chart of words, sorted it by how often words appeared in the document, and removed about the bottom 70% of words. I made an assumption (which is almost certainly so broad that the data will have to be sliced again a different way for meaningful analysis) that any words that weren’t repeated that often just weren’t a real “theme” to the people writing the document. Interestingly, things like “security” and “health” and some others were left in the set, but “cyber” was removed. Hmm. :) After that, I went manually through the remaining set of words and removed those that seemed to not have any cluster value (both through intuition and by visually watching the scatterplot of the whole set while I highlighted individual words to see what lit up). Finally, since I originally wanted to make visually interesting things more than do real analysis, I used some blurring, resharpening, and layering to give it a more cloudy, vibrant feeling. Interestingly, that created “clouds” around many of the clusters and made them easier to make out for analysis. That supports my whole theory that what the eyes and mind like to look at is what the mind and eyes are better able to make intelligent use of.
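The filtering steps above (length bounds, then a frequency cutoff) are easy to reproduce. Here is a minimal Python sketch of that idea; the word list and the exact thresholds are hypothetical stand-ins for illustration, not the real bill data:

```python
from collections import Counter

# Hypothetical words standing in for the parsed bill text
words = ['the', 'security', 'security', 'health', 'health', 'health',
         'appropriation', 'of', 'cyber', 'section', 'section', 'funds']

# Keep words between 4 and 17 characters: shorter ones are mostly
# filler, and the longer ones in my data were parse artifacts
kept = [w for w in words if 4 <= len(w) <= 17]

# Rank distinct words by frequency and drop roughly the bottom 70%
counts = Counter(kept)
ranked = [w for w, _ in counts.most_common()]
cutoff = max(1, int(len(ranked) * 0.3))
themes = set(ranked[:cutoff])
```

With a toy sample this small the cutoff is brutal, which mirrors how a rarely repeated word like “cyber” can drop out even though it feels important.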
The final result is here:
Update: You can now download a Webcam Audio Visualizer based on the ones referenced in this tutorial – and some completely new ones – by clicking HERE
So I’ve been making some new art lately that I think is pretty cool. Back at Artomatic last year, I wrote code that generated a mosaic of one image out of another and made a 6′x6′ photo, and I wondered if the code was art, since the only thing it did was generate that one mosaic.
At that point, though, it was still static and the question was (to me) relatively easy to answer.
This time, I wanted something more dynamic and interactive. I wanted to further explore the question of whether or not something that changes every time you see it, and which depends on its environment, is still “art”. What I ended up doing is using Apple’s Quartz Composer – a visual media programming language – to create an “audio visualizer” (sort of like you see in iTunes, Winamp, etc.). What’s different about this piece, though, is that it combines live webcam input with live audio input into a pulsating, moving interpretation of the world around the piece.
In some ways, the work can be considered just a “tool”. But, on the other hand – and more importantly, I think – the fact that the ranges of color, proportion, size, placement, and dimension have all been pre-designed by the artist to work cohesively no matter what the environmental input moves it into the realm of “art”.
In this post, I hope to use the piece in a way that will give you an example of what it would look like as part of a real live installation and to help explain the ins and outs of my process.
An easy example of where this would do really well is at a music concert. The artist would point the camera at the band or the audience, and, as the band plays, the piece would morph and transform the camera input in time to the music, and a projector would display the resulting visuals onto a screen next to the band (or even onto the band itself). This is just one suggestion, though. Interesting static displays could also be recorded based on live input to be replayed later. It’s this latter idea that you’ll see represented below (though you might notice my MacBook chugging a little bit on the visuals…slightly offbeat. That’s a slow-hardware issue :) ):
In that clip, I pointed the webcam at myself and a variety of props (masks, dolls, cats, the laptop, etc) as music plays from the laptop speakers. There was a projector connected to the laptop displaying the resulting transformations onto a screen in real time. A video camera was set up to record the projection as it happened. My setup isn’t much, but it can be confusing, so take a look below. My laptop with the piece on it, webcam connected to the laptop, projector projecting the piece as it happens, and video camera recording the projection:
As I said earlier, I used Quartz Composer – a free programming language from Apple upon which a lot of Mac OS X depends. Some non-technical artists might be a little leery of the term “programming language”, but Quartz is almost designed for artists. It’s drag and drop. Imagine if you could arrange Legos to make your computer do stuff: red Legos did one type of thing, blue did another, green did a third. That’s basically Quartz. There are preset “patches” that do various things: get input, transform media, output media somehow, etc. You pick your block and it appears on screen. If you want to put webcam input on a sphere, you would put a sphere block on the screen, put a video block on the screen, and drag a line from the video to the sphere. It’s as easy as that. First, I’d suggest you take a look at this short introduction by Apple here:
Then take a look at the following clip and I’ll walk you through how it works at a high level:
The code for this is fairly straightforward:
In the box labeled “1” on the left, I’ve inserted a “patch” that collects data from a webcam and makes it available to the rest of the “Composition” (as Quartz programs are called). On the right side of that patch, you can see a circle labeled “Image”. That means that the patch will send whatever video it gets from the webcam to any other patch that can receive images. (Circles on the right side indicate things that the patch can SEND to others. Circles on the left indicate information that the patch can RECEIVE from others.)
The patch labeled “3”, next to the video patch, is designed to resize any images it receives. I have a slow MacBook, but my webcam is high definition, so I need to make the resolution of the webcam lower (the pictures smaller) so my laptop can better handle it. It receives the video input from the video patch, resizes it, and then makes the newly resized video available to any patch that needs it. (You can set the resize values through other patches by connecting them to the “Resize Pixels Wide” and “Resize Pixels High” circles, but in this case they are static – 640×480. To set static values, just double-click the circle you want to set and type in the value you want it to have.)
In the patch labeled “4”, we do something similar, but this time I have it change the contrast of the video feed. I didn’t really need to, but I wanted to see how it looked. The Color Control patch then makes the newly contrasted image available to any other patch that needs it.
On the far right, the webcam output is finally displayed via patch “8”. Here I used a patch that draws a sphere on the screen and textured the sphere (covered the sphere with an image) with the webcam feed after it has been resized and had contrast added.
So now we have a sphere with the webcam video on it, but it’s not doing anything “in time” with the music being played.
What I decided to do was to change the diameter of the sphere based on the music as well as the color tint of the sphere.
If you look at patch “2” on the left, you’ll notice 14 circles on its right side. These represent different frequency bands of the music coming in from the microphone – the same sort of thing as the bands on an equalizer on your stereo. (It’s actually split into 16 bands in Quartz; I just only use 14.) Each of those circles has a constantly changing value (from 0.0000 – 1.0000) based on the microphone input. Music with lots of bass, for example, would have high numbers in the first few bands and low numbers in the last few. We use these bands to change the sphere’s diameter and color.
I chose to use a midrange frequency band to control the size of the sphere because that’s constantly changing, no matter whether the music is bass-heavy or tinny. You can see a line drawn from the 6th circle down in patch “2” to the “Initial Value” circle of patch “5”. Patch “5” is a math patch that performs simple arithmetic operations on values it gets and outputs the results. All I’m doing here is making sure my sphere doesn’t get smaller than a certain size. Since the audio splitter is sending me values from 0.000 – 1.000, I could conceivably have a diameter of 0. So I use the math patch to add enough to that value that my sphere will always take up about a 25th of the screen at its smallest. Patch “5” then sends that value to the diameter input of the sphere patch (#8) we discussed earlier.
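The offset trick in the math patch is easy to sketch outside of Quartz. Here is a minimal Python version of the same idea; the minimum-diameter constant is my own guess for illustration, not the value used in the actual composition:

```python
def sphere_diameter(band_value, min_diameter=0.2):
    """Map an audio band value in [0.0, 1.0] to a sphere diameter.

    Adding a fixed offset means the sphere never shrinks away during
    quiet passages. (min_diameter is a hypothetical value chosen for
    illustration.)
    """
    return min_diameter + band_value

# Silence still yields a visible sphere; a loud midrange grows it.
assert abs(sphere_diameter(0.0) - 0.2) < 1e-9
assert abs(sphere_diameter(1.0) - 1.2) < 1e-9
```

The same pattern (scale and offset a normalized input before it drives a visual parameter) shows up everywhere in this kind of work.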
It’s these kinds of small decisions that, when compounded on one another, add up to visualizations with specific aesthetic feelings and contribute to the ultimate success or failure of the piece.
Another aspect of controlling the feel of your piece is color. In patch “6”, you see three values from the audio splitter go in, but only one come out. I used the three values as the initial seeds for the “Red”, “Green”, and “Blue” values. Patch “6” takes those values and converts them into an RGB color value. However, notice that patch “6” has three “Color” circles on the right, but only one gets used? That’s because I designed that patch to take in one set of Red, Green, and Blue values based on the music, but mix those values into three -different- colors. So as the music changes, those three colors all change in sync, at the same time, and by roughly the same amount, but they’re still different colors. That lets me add variety to the piece and allows me, as the artist, to create a dynamic “palette” to choose from that will always be different but still keep constant color relationships. This contributes to a cohesive and consistent feel for the piece. A detailed explanation of how I do that is out of the scope of this post, but you can see the code below and take some guesses if you like:
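Since the post invites guesses about how one RGB seed becomes three related colors, here is one plausible sketch in Python: a simple channel rotation, which keeps the three colors moving in sync while staying distinct. This is my guess at the kind of technique involved, not the composition’s actual logic:

```python
def palette_from_seed(r, g, b):
    """Derive three related colors from one audio-driven RGB seed.

    Rotating the channels keeps the same overall intensity in each
    color, so all three shift together as the music changes but
    remain distinct from one another.
    """
    return [(r, g, b), (b, r, g), (g, b, r)]

# One seed, three colors that always keep the same relationship
colors = palette_from_seed(0.9, 0.2, 0.4)
```

Any mapping that applies a fixed transformation to the seed would have the same property: the palette changes constantly, but the relationships between its colors do not.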
And that’s pretty much that. We have a sphere that displays webcam input and changes size and color according to the music playing nearby. But that’s really not all that interesting, is it? What if we added a few more spheres? What if we used all three of the colors from patch “6”? What if those spheres all moved in time to DIFFERENT bands of the music?
The code might look something like this:
And the resulting output looks something like this:
Yeah, I know the visuals are sort of silly and the song cheesy, but the music’s beat is easy to see, and there just isn’t that much in my apartment to put on webcam that I haven’t already.
Also, take a look at 55 seconds through about 1:05. The visualization goes a bit crazy. See the white box on top? You can’t see it in the video, but that box lets me enter input parameters on the fly to affect how the visualization responds. This is the VJ aspect. For these visualizations, I’ve only enabled two: how fast/big the visual components get and how fast/slow they get small. In that 10-second segment, I’m jacking them up a lot.
What about the original video? What does that code look like? See below. It’s a little bit more complicated, but essentially the same thing. Instead of 16 spheres, I use a rotating 3D cube and a particle fountain (squares spurt out of a specific location like water out of a fountain). In addition to just color and size, the music playing nearby also affects location, rotation, minimum size, speed of the particles, and a number of other visual elements:
At some point (as soon as I figure out the Cocoa), I’ll upload the visualizer here as a Mac OS X application for download.
So, what do you think? Is this art? If not, what is it? Just something that looks cool? In my mind, artistic vision and aesthetics are a huge component of making “multimedia” “new technology” art, no matter how big a component the technology is. Without some sort of understanding of what you are visually trying to communicate, it’s only by chance that you’ll end up with something that looks good. But, even beyond that, I found that I had to think pretty far ahead and understand my medium in order to create something that would look consistent AND visually pleasing no matter what environment it was in and no matter what it was reacting to. It was like writing the rules to create an infinite number of abstract paintings that would always look like they were yours.
Also, figuring out what to put in the webcam view, when, and at what distance is an important part. When I’m paying attention (as in the first video), it adds a whole new dimension. When I don’t care and point it at anything (as in the demo videos), the whole thing becomes a bit more throwaway.
I’ve spammed this particular link everywhere else I can think of, but still neglected to post it here on my blog.
Basically, I was approached a few months ago by a senior editor of Symantec’s online magazine “Norton Today” because they were interested in doing a piece on Art and Security. I was approached because of my old work in security data visualization and the fact that I’d started to rework and hang the pieces in art shows like Artomatic and My Space on 7th.
Anyway, the interview went really well (in addition to being a lot of fun) and it’s now online at:
(Edit: This link now appears down after a few months. Symantec has republished the article here: http://www.thegeekweekly.com/feature/turning_computer_vis_into_art/index.html )
They used a few older images in their Flash slideshow (my fault – I didn’t get them newer images in time). These were the originals we used at NetSec to do analysis, and they have been in a number of presentations (and were in the batch I sent to ArcSight as examples when they were still developing Interactive Discovery, iirc).
You can find the “art” versions that I’ve hung up in galleries at the following link:
I’m still interested in working on more of these, but I have been moving from graphing – which was a necessity of the business at the time – into a broader field of ontological information/concept representation in art.
(This is in addition to my media experimentation with / interest in projection. I think I’d like to merge these two tracks together in the future, but I haven’t gotten there yet.)
So I recently bought a ton of film strip gear off of Craigslist. Do you all remember this stuff from elementary school? Or if you’re older, high school? They’re basically like slide presentations, except the images aren’t ever cut from the strip. You insert the strip in a projector or personal viewer and play either a tape or a record for a soundtrack. When you hear a BEEP on the soundtrack, you flip to the next image on the strip.
I always thought they were dumb in school, but I did want to make my own at the time, and they’ve been on my mind a lot lately for whatever reason. So, I was pretty thrilled when someone on artdc pointed out a Craigslist ad the librarian at Queen Anne School in Upper Marlboro had put out: 4 projectors, 3 personal viewers, and 60 strip presentations for $100. Holy cow!
Anyway, I got all this gear delivered to work (it takes up…an entire…cube…) last week and have slowly been hauling it home and playing with it. I’ve found I want to explore three potential uses for it:
1. Cutting up and reusing the material from the film strips in other art as light-driven collage material
2. Making an actual film strip in the old style, with the simple lettering and exaggerated imagery, and doing a projection show of some sort
3. Using the projectors and gear as part of photo still lifes.
One of these three is obviously easier than the others, so I’ve started out taking pictures of the projectors and strips (Paivi has also been photographing some of the images projected). I put up a few of the recent shots on Flickr, and one of them made the DCist’s photo of the day:
Some of the other shots are here: