
As promised in the previous post, here are demo videos of my three new Quartz Composer Webcam Audio Visualizer compositions. I'm being a bit silly in them, but that's because I don't have an external webcam or anything more artistic to point them at tonight. In the future, I might do a real non-demo piece of art with one or more of these. No promises, though. The next post will be about security, I swear. :)


I guess once I get going, I keep going for a while. Recently, I put up some T-shirts for sale that use my art as designs. However, after a few years of showing them, I also wanted to make some of my data and security visualization art available, and yesterday I finally did it. You can click here to go to the store:

Data Visualization Art Prints

Some of these don't look quite as surreal or "clean" as other data visualization art, but that's because I'm very interested in a specific cross-section of usability and "prettiness" in the aesthetics of images: the place where what makes them useful is also what makes them attractive. Finding that line, in my mind, is what makes them "art". One could make some really cool-looking images out of most semi-structured data, but they would cease to be useful. The ones here retain their usefulness to security and data analysts while, at the same time, being attractive pieces.

If you're interested in other security visualization information, try secviz.org.

Today, after the 8-hour "Industrial Control Systems Security for IT Professionals" class, I wanted to make something pretty. And code. And work on a protocol problem. I've needed to look a little at the new Stimulus bill for work lately, so I thought I'd try to at least say I'd written Python today, dissect the text of the bill into parsable chunks, and then throw it into some visualizations. I can't easily capture the interesting avenues of analysis I was pursuing visually (and I don't feel like writing them up), but I did manage to make some kind-of-pretty pictures. Hopefully someone feels inspired by them and goes down a similar path. (I already have some ideas about further stats I want to parse from the bill to be able to look at it more meaningfully. Perhaps I'll do it this weekend – this was just the first cut at setting it up.)

First, I grabbed the full text of the bill from HERE. Then, I wrote some (stupidly) simple Python (again, I'm never sure if it's -good- Python) to parse the bill and turn it into a new file with five columns: Word Number, Line Number, Word Position in Line, the Word itself, and Word Length. This essentially turned the bill into a text file with every word of the bill on its own line (in the order it showed up), but with machine-readable metadata I could use to visually represent it.

stimulus = open('/Users/sintixerr/Documents/stimulus.txt', 'r')
finalfile = open('/Users/sintixerr/Documents/sdump.txt', 'w')
linenum = 0
wordnum = 0
for line in stimulus:
    linenum += 1
    lineposition = 0
    # split() breaks on any whitespace and drops the trailing newline
    words = line.split()
    for w in words:
        lineposition += 1
        wordnum += 1
        # one tab-delimited row per word: number, line, position, word, length
        gstruct = str(wordnum) + '\t' + str(linenum) + '\t' + str(lineposition) + '\t' + w.upper() + '\t' + str(len(w)) + '\n'
        finalfile.write(gstruct)

stimulus.close()
finalfile.close()

Then, I opened up the new tab-delimited bill in my visualizer of choice and ran it through a few different ways of representing the bill.

First, the raw text – without any real manipulation – looked cool in and of itself, and I noticed some interesting, if obvious in hindsight, features. (I did clean out some obviously bad data first with a little sed action, but that mostly just involved removing punctuation that caused the same words to show up as different ones.)
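If you'd rather stay in Python, something like this would do the same job (just a sketch; the actual sed commands aren't shown here, so this is only one way to get the same effect):

import re

# Strip punctuation so "recovery," and "recovery" count as the same word.
# (A guess at the cleanup; the original sed commands aren't shown.)
with open('/Users/sintixerr/Documents/stimulus.txt') as raw:
    text = raw.read()

with open('/Users/sintixerr/Documents/stimulus-clean.txt', 'w') as out:
    out.write(re.sub(r'[^\w\s]', '', text))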

[Image: Stimulus Bill visualized in its entirety]

Stimulus Bill Visualized in its Entirety. In this image, the Y axis represents every word (ASCII characters with spaces or carriage returns on either side) in the bill and the X axis represents the Line Numbers those words appeared on.

First, if you look about a fourth of the way from the left, and then again closer to halfway, you see a vertical "break" in the scatterplot where the density looks much lower. That is probably a major section break in the original document (I honestly haven't actually read it in English yet). That possibility is supported by the second observation, which is: even in human-written documents, you can still discern protocol visually. (Again, obvious, but it's neat.) If you look at the bottom third of the image, it looks nothing like the top two-thirds: much more curving paths, fewer horizontal lines, less density, etc. If you look at those "words", they're all document structure words (like section numbers, headings, etc.) …and monetary figures. If you look closely, there appear at first glance to be two or more incompatible or unrelated document content structures there. Above that section is where the more obvious free-form English exists in the set.

Moving on from there, I wanted to see if I could get anything intellectually or aesthetically interesting by using a scatterplot to draw out the shape of the bill. To do that, I plotted "Line Number" on the X axis and "Position of Word in the Line" on the Y axis. (Actually, originally those two were swapped, but the resulting image "looked better" when I swapped the X and Y.) I colored everything by Word on a categorical scale so things wouldn't blend together too much, and then ratcheted up the size scale to reduce empty space. I was looking for a visual representation of the literal structure of the document, not an analysis tool, or I wouldn't have done that last bit.
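If you want to reproduce something like it at home, a rough equivalent in Python with pandas and matplotlib (rather than the visualizer I actually used, so treat the styling as a guess) might look like this:

import matplotlib.pyplot as plt
import pandas as pd

# Load the tab-delimited dump produced by the parsing script above.
cols = ['word_num', 'line_num', 'line_pos', 'word', 'word_len']
df = pd.read_csv('/Users/sintixerr/Documents/sdump.txt', sep='\t', names=cols)

# One categorical color per word, with big marks to reduce empty space.
codes = df['word'].astype('category').cat.codes
plt.scatter(df['line_num'], df['line_pos'], c=codes, cmap='tab20', s=40)
plt.xlabel('Line Number')
plt.ylabel('Position of Word in Line')
plt.show()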

The resulting image looks like this:

[Image: stimulusbill1]

Shape of the Stimulus Bill on its side. If you were to compress the actual text of the whole bill into one page and rotate it 90 degrees counter-clockwise, it would probably have the same shape as this, only with text.

Finally, I was curious if I could do a little manual clustering work. I tried to narrow the words in the data set down to those that might have some intrinsic meaning in the context of the stimulus bill. This means I got rid of prepositions, repeated filler words, etc. I did this by knocking out every word under 4 letters and all of those over 17 characters (the over-17 ones were all artifacts of turning the bill into something parsable, not actual real words). Then I created a bar chart of words, sorted it by how often words appeared in the document, and removed about the bottom 70% of words. I made an assumption (which is almost definitely so broad that the data will have to be sliced again a different way for meaningful analysis) that any words that weren't repeated that often just weren't a real "theme" to the people writing the document. Interestingly, things like "security" and "health" and some others were left in the set, but "cyber" was removed. Hmm. :) After that, I went manually through the remaining set of words and removed those that seemed to not have any cluster value (both through intuition and by visually watching the scatterplot of the whole set while I highlighted individual words to see what lit up).

Finally, since I originally wanted to make visually interesting things more than do real analysis, I used some blurring, resharpening, and layering to give a more cloudy, vibrant feeling to it. Interestingly, that created "clouds" around many of the clusters and made them easier to make out for analysis. That supports my whole theory that what the eyes and mind like to look at is what the mind and eyes are better able to make intelligent use of.
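In code, the mechanical part of that filtering might look roughly like this (the length cutoffs are the ones above; the "bottom 70%" cut is just one reasonable way to implement the frequency sort, reusing the df from the earlier sketch):

from collections import Counter

# Keep only words between 4 and 17 characters: shorter ones are mostly
# filler, longer ones were parsing artifacts.
candidates = [w for w in df['word'] if 4 <= len(w) <= 17]

# Rank by frequency and drop roughly the bottom 70%.
counts = Counter(candidates)
ranked = [w for w, _ in counts.most_common()]
keep = set(ranked[:int(len(ranked) * 0.3)])
clusters = df[df['word'].isin(keep)]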

The final result is here:

[Image: Stimulus Bill Subject Groupings]

Words of substance that might be indicative of topics or subjects within the bill. X axis, like the first picture, is line number and Y axis is Word.

Update: You can now download a Webcam Audio Visualizer based on the one referenced in this tutorial – and some completely new ones – by clicking HERE

INTRO

So I've been making some new art lately that I think is pretty cool. Back at Artomatic last year, I wrote code that generated a mosaic of one image out of another, made a 6'x6' photo from it, and wondered whether the code itself was art, since the only thing it did was generate that one mosaic.

At that point, though, it was still static and the question was (to me) relatively easy to answer.

This time, I wanted something more dynamic and interactive. I wanted to further explore the question of whether or not something that changes every time you see it, and which depends on its environment, is still "art". What I ended up doing is using Apple's Quartz Composer – a visual media programming language – to create an "audio visualizer" (sort of like you see in iTunes, Winamp, etc.). What's different about this piece, though, is that it combines live webcam input with live audio input into a pulsating, moving interpretation of the world around the piece.

In some ways, the work can be considered just a "tool". But, on the other hand – and more importantly, I think – the fact that the ranges of color, proportion, size, placement, and dimension have all been pre-designed by the artist to work cohesively, no matter what the environmental input is, moves it into the realm of "art".

In this post, I hope to use the piece in a way that will give you an example of what it would look like as part of a real live installation, and to help explain the ins and outs of my process.

THE BASICS

An easy example of where this would do really well is a music concert. The artist would point the camera at the band or the audience and, as the band plays, the piece would morph and transform the camera input in time to the music, while a projector displayed the resulting visuals onto a screen next to the band (or even onto the band itself). This is just one suggestion, though. Interesting static displays could also be recorded based on live input to be replayed later. It's this latter idea that you'll see represented below (though you might notice my MacBook chugging a little bit on the visuals… slightly offbeat. That's a slow-hardware issue :) ):

In that clip, I pointed the webcam at myself and a variety of props (masks, dolls, cats, the laptop, etc.) as music played from the laptop speakers. There was a projector connected to the laptop displaying the resulting transformations onto a screen in real time. A video camera was set up to record the projection as it happened. My setup isn't much, but it can be confusing, so take a look below: my laptop with the piece on it, the webcam connected to the laptop, the projector projecting the piece as it happens, and the video camera recording the projection:

[Image: Quartz Webcam Audio Visualizer demo recording setup]

TUTORIAL/EXPLANATION

As I said earlier, I used Quartz Composer – a free programming language from Apple upon which a lot of Mac OS X depends. Some non-technical artists might be a little leery of the term "programming language", but Quartz is almost designed for artists. It's drag and drop. Imagine if you could arrange Lego bricks to make your computer do stuff: red bricks did one type of thing, blue did another, green did a third. That's basically Quartz. There are preset "patches" that do various things: get input, transform media, output media somehow, etc. You pick your block and it appears on screen. If you want to put webcam input on a sphere, you would put a sphere block on the screen, put a video block on the screen, and drag a line from the video to the sphere. It's as easy as that. First, I'd suggest you take a look at this short introduction by Apple here:

http://developer.apple.com/graphicsimaging/quartz/quartzcomposer.html

Then take a look at the following clip and I'll walk you through how it works at a high level:

The code for this is fairly straightforward:

[Image: Simple Quartz Composer Webcam Audio Visualizer]

In the box labeled "1" on the left, I've inserted a "patch" that collects data from a webcam and makes it available to the rest of the "Composition" (as Quartz programs are called). On the right side of that patch, you can see a circle labeled "Image". That means that the patch will send whatever video it gets from the webcam to any other patch that can receive images. (Circles on the right side indicate things that the patch can SEND to others. Circles on the left indicate information that the patch can RECEIVE from others.)

The patch labeled "3", next to the video patch, is designed to resize any images it receives. I have a slow MacBook, but my webcam is high definition, so I need to make the resolution of the webcam lower (the pictures smaller) so my laptop can better handle it. It receives the video input from the video patch, resizes it, and then makes the newly resized video available to any patch that needs it. (You can set the resize values through other patches by connecting them to the "Resize Pixels Wide" and "Resize Pixels High" circles, but in this case they are static – 640×480. To set static values, just double-click the circle you want to set and type in the value you want it to have.)

In the patch labeled “4”, we do something similar, but this time I have it change the contrast of the video feed. I didn’t really need to, but I wanted to see how it looked. The Color Control patch then makes the newly contrasted image available to any other patch that needs it.

On the far right, the webcam output is finally displayed via patch "8". Here I used a patch that draws a sphere on the screen and textured the sphere (covered the sphere with an image) with the webcam feed after it has been resized and the contrast adjusted.

So now we have a sphere with the webcam video on it, but it’s not doing anything “in time” with the music being played.

What I decided to do was change both the diameter of the sphere and the color tint of the sphere based on the music.

If you look at patch "2" on the left, you'll notice 14 circles on the right side of it. These represent different (frequency) bands of the music coming in from the microphone – the same type of thing you'd see on your stereo's equalizer. (It's actually split into 16 bands in Quartz; I just use 14.) Each of those circles has a constantly changing value (from 0.0000 to 1.0000) based on the microphone input. Music with lots of bass, for example, would have a lot of high numbers in the first few bands and low numbers in the last few bands. We use these bands to change the sphere's diameter and color.

I chose to use a midrange frequency band to control the size of the sphere because that's constantly changing, no matter whether the music is bass-heavy or tinny. You can see a line going from the 6th circle down in patch "2" drawn to the "Initial Value" circle of patch "5". Patch "5" is a math patch that performs simple arithmetic operations on values it gets and outputs the results. All I'm doing here is making sure my sphere doesn't get smaller than a certain size. Since the audio splitter is sending me values from 0.000 to 1.000, I could conceivably have a diameter of 0. So, I use the math patch to add enough to that value that my sphere will always take up about a 25th of the screen, at its smallest. Patch "5" then sends that value to the diameter input of the sphere patch (#8) we discussed earlier.
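In ordinary code, that math patch is doing nothing more exotic than this (the actual floor value isn't in this post, so the constant below is a guess):

# Hypothetical stand-in for the math patch; 0.2 is a guessed floor value.
MIN_DIAMETER = 0.2

def sphere_diameter(band_level):
    # band_level is the 0.0-1.0 value coming from the audio splitter.
    return MIN_DIAMETER + band_level

print(sphere_diameter(0.0))  # silence: sphere stays at its minimum size
print(sphere_diameter(1.0))  # a loud peak: sphere at its largest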

It’s these kinds of small decisions that, when compounded on one another, add up to visualizations with specific aesthetic feelings and contribute to the ultimate success or failure of the piece.

Another aspect of controlling the feel of your piece is color. In patch "6", you see three values from the audio splitter go in, but only one come out. I used the three values as the initial seeds for the "Red", "Green", and "Blue" values. Patch "6" takes those values and converts them into an RGB color value. However, notice that patch "6" has three "Color" circles on the right, but only one gets used? That's because I designed that patch to take in one set of Red, Green, and Blue values based on the music, but mix those values into three -different- colors. So as the music changes, those three colors all change in sync, at the same time, and by roughly the same amount, but they're still different colors. That lets me add variety to the piece and allows me, as the artist, to create a kind of dynamic "palette" to choose from that will always be different, but still keep constant color relationships. This contributes to a cohesive and consistent feel to the piece. A detailed explanation of how I do that is out of the scope of this post, but you can see the code below and take some guesses if you like:

[Image: the color manager composition]
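For those guessing along, here's one plausible way (sketched in Python, and not necessarily what the composition above actually does) to derive several related colors from one seed: shift the hue by fixed offsets, so all the outputs move together as the music changes.

import colorsys

def related_palette(r, g, b, offsets=(0.0, 1/3, 2/3)):
    # Rotate the hue of the seed color by fixed offsets so the three
    # outputs always change in sync but keep constant relationships.
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return [colorsys.hsv_to_rgb((h + o) % 1.0, s, v) for o in offsets]

# Three audio bands seed one color; the outputs stay evenly spaced in hue.
print(related_palette(0.8, 0.3, 0.5))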

And that's pretty much that. We have a sphere that displays webcam input and changes size and color according to the music playing nearby. But that's really not all that interesting, is it? What if we added a few more spheres? What if we used all three of the colors from patch "6"? What if those spheres all moved in time to DIFFERENT bands of the music?

The code might look something like this:

[Image: the multi-sphere composition]

And the resulting output looks something like this:

Yeah, I know the visuals are sort of silly and the song is cheesy, but the music's beat is easy to see, and there just isn't that much in my apartment to put on webcam that I haven't already.

Also, take a look at 55 seconds through about 1:05. The visualization goes a bit crazy. See the white box on top? You can't see it in the video, but that box lets me enter input parameters on the fly to affect how the visualization responds. This is the VJ aspect. For these visualizations, I've only enabled two: how fast/big the visual components get and how fast/slowly they get small. In that 10-second segment, I'm jacking them up a lot.

What about the original video? What does that code look like? See below. It's a little bit more complicated, but essentially the same thing. Instead of 16 spheres, I use a rotating 3D cube and a particle fountain (squares spurt out of a specific location, like out of a fountain). In addition to just color and size, the music playing nearby also affects location, rotation, minimum size, speed of the particles, and a number of other visual elements:

[Image: the full webcam audio visualizer composition]

At some point (as soon as I figure out the Cocoa), I'll upload the visualizer here as a Mac OS X application for download.

SUMMARY

So, what do you think? Is this art? If not, what is it? Just something that looks cool? In my mind, artistic vision and aesthetics are a huge component of making “multimedia” “new technology” art, no matter how big a component the technology is.  Without some sort of understanding of what you are visually trying to communicate, it’s only by chance that you’ll end up with something that looks good.  But, even beyond that, I found that I had to think pretty far ahead and understand my medium in order to create something that would look consistent AND visually pleasing no matter what environment it was in and no matter what it was reacting to. It was like writing the rules to create an infinite number of abstract paintings that would always look like they were yours.

Also, figuring out what to put in the webcam view, when, and at what distance is an important part. When I'm paying attention (as in the first video), it adds a whole new dimension. When I don't care and point it at anything (as in the demo videos), the whole thing becomes a bit more throwaway.
