You are currently browsing the category archive for the ‘interactive’ category.
Longer, more detailed post to follow – with free code and everything – but I wanted to post a video of art being made with my brainwaves:
In this demo (which is a significant step beyond my last), my project selects between a series of images, merges them, moves them, and adds various visual effects based only on input from my brain waves (as measured by a Neurosky Mindset). All images – both drawings and photos – were made by me. Depending on when I run this, the images selected and how they're merged vary significantly. In this case, only a small subset were selected. Other times, there is a wider variety. It's important to note that this has often created pairings and mergings that are fantastically cool looking. The next step: creating a self-portrait video of me sleeping, with a curved screen over top of me projecting what my mind does with this while I sleep.
EDIT: THIS HAS BEEN CANCELED DUE TO SNOW. Not sure what to do after ShmooCon Friday night? Not going to the con but need something to do? Come over to the HacDC Hacker's Lounge event for a little while (it runs 8pm-2am). I've been putting some fun NEW interactive Quartz video projections together for the event (the link goes to earlier work – you'll need to show up to see the newer stuff) and Daniel Packer will be doing some audio with SuperCollider. Oh yeah, and I hear there will be booze.
I can’t tell you if there will be 10 people or 100 there, but if you take a chance and show up, that’s 1 closer to 100 :)
EDIT: I have some newer, better webcam audio visualizers and some utility patches available now. Click Here: http://sintixerr.wordpress.com/quartz-composer-downloads/
For all of you who have asked for this, I've made my Artomatic Quartz Composer based webcam audio visualizer available as a free download. (Keep in mind, this is only for Mac OS X users – Quartz isn't portable.)
You can download it here: http://jackwhitsitt.com/Artomatic09-final-whitsitt.zip
(I'm calling it "WAVIQ" for short, for "Webcam Audio Visualizer In Quartz", since it needs some sort of name and I don't feel that creative about it.)
A quick overview:
The composition has two inputs – the webcam and an audio source. If you have a built-in webcam, it will default to that. Likewise, if you have a built-in mic (most laptops do), the composition will default to using that as your audio source. You can change these by going into the patch inspector for the Video and Audio patches and selecting "settings". (In the case of the audio, double-click the macro patch "Audio Source" and then click on "Audio Input" to get there.)
The only other settings you’ll be interested in are the Increasing Scale and Decreasing Scale parameters found in the Audio Input patch. These affect how fast the values for movement, color, etc. get bigger and how fast they get smaller. This will affect how the composition responds to different music. Also, keep in mind that in the audio settings of OS X itself, you can change the mic sensitivity. This will affect how the composition responds as well.
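If you're curious what those two parameters actually do, here's the general idea sketched in plain Python (the function and argument names are mine for illustration, not Quartz's):

```python
def smooth(values, rise=0.9, fall=0.1):
    """Asymmetric smoothing of a stream of 0.0-1.0 audio values:
    the output jumps quickly toward louder input (rise) but decays
    slowly when the input drops (fall), which is what makes the
    visuals pop on a beat and then ease back down."""
    out, level = [], 0.0
    for v in values:
        # Pick the fast rate when the signal is rising, the slow one when falling
        rate = rise if v > level else fall
        level += rate * (v - level)
        out.append(level)
    return out
```

Crank the increasing rate up and the decreasing rate down and the visuals hit hard and linger; do the opposite and everything gets twitchy.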
You can also find a basic tutorial to get you started on tweaking this in the links below.
That's it. Drop me a line with any questions and have fun with it. If you do end up using it, I'd love to hear about it.
- Tutorial I wrote explaining the basics of how this works:
- Stop-Motion Video Example of how I’m using it at Artomatic:
- Screen-shots of my Artomatic Art Installation:
Update: You can now download a Webcam Audio Visualizer based on the one referenced in this tutorial – and some completely new ones – by clicking HERE
So I’ve been making some new art lately that I think pretty is cool. Back at Artomatic last year, I wrote code that generated a mosaic of one image out of another and make a 6′x6′ photo and wondered if the code was art, since the only thing it did was generate that one mosaic?
At that point, though, it was still static and the question was (to me) relatively easy to answer.
This time, I wanted something more dynamic and interactive. I wanted to further explore the question of whether or not something that changes every time you see it, and which depends on its environment, is still "art". What I ended up doing is using Apple's Quartz Composer – a visual media programming language – to create an "audio visualizer" (sort of like you see in iTunes, Winamp, etc.). What's different about this piece, though, is that it combines live webcam input with live audio input into a pulsating, moving interpretation of the world around the piece.
In some ways, the work can be considered just a “tool”. But, on the other hand – and more importantly, I think – the fact that the ranges of color, proportion, size, placement, and dimension have all been pre-designed by the artist to work cohesively no matter what the environmental input moves it into the realm of “art”.
In this post, I hope to use the piece in a way that will give you an example of what it would look like as part of a real live installation and to help explain the ins and outs of my process.
An easy example of where this would do really well is at a music concert. The artist would point the camera at the band or the audience and, as it plays, the piece would morph and transform the camera input in time to the music, while a projector displays the resulting visuals onto a screen next to the band (or even onto the band itself). This is just one suggestion, though. Interesting static displays could also be recorded based on live input to be replayed later. It's this latter idea that you'll see represented below (though you might notice my macbook chugging a little bit on the visuals…slightly offbeat. That's a slow hardware issue :) ):
In that clip, I pointed the webcam at myself and a variety of props (masks, dolls, cats, the laptop, etc) as music plays from the laptop speakers. There was a projector connected to the laptop displaying the resulting transformations onto a screen in real time. A video camera was set up to record the projection as it happened. My setup isn’t much, but it can be confusing, so take a look below. My laptop with the piece on it, webcam connected to the laptop, projector projecting the piece as it happens, and video camera recording the projection:
As I said earlier, I used Quartz Composer – a free programming language from Apple upon which a lot of Mac OS X depends. Some non-technical artists might be a little leery of the term "programming language", but Quartz is almost designed for artists. It's drag and drop. Imagine if you could arrange Legos to make your computer do stuff: red Legos did one type of thing, blue did another, green did a third. That's basically Quartz. There are preset "patches" that do various things: get input, transform media, output media somehow, etc. You pick your block and it appears on screen. If you want to put webcam input on a sphere, you would: put a sphere block on the screen, put a video block on the screen, and drag a line from the video to the sphere. It's as easy as that. First, I'd suggest you take a look at this short introduction by Apple here:
Then take a look at the following clip and I'll walk you through how it works at a high level:
The code for this is fairly straightforward:
In the box labeled “1″ on the left, I’ve inserted a “patch” that collects data from a webcam and makes it available to the rest of the “Composition” (as Quartz Programs are called). On the right side of that patch, you can see a circle labeled “Image”. That means that the patch will send whatever video it gets from the webcam to any other patch that can receive images. (Circles on the right side indicate things that the patch can SEND to others. Circles on the left indicate information that the patch can RECEIVE from others.)
The patch labeled “3″, next to the video patch, is designed to resize any images it receives. I have a slow macbook, but my webcam is high definition so I need to make the resolution of the webcam lower (the pictures smaller) so my laptop can better handle it. It receives the video input from the video patch, resizes it, and then makes the newly resized video available to any patch that needs it. (You can set the resize values through other patches by connecting them to the “Resize Pixels Wide” and “Resize Pixels High” circles, but in this case they are static – 640×480. To set static values, just double-click the circle you want to set and type in the value you want it to have.)
In the patch labeled “4″, we do something similar, but this time I have it change the contrast of the video feed. I didn’t really need to, but I wanted to see how it looked. The Color Control patch then makes the newly contrasted image available to any other patch that needs it.
On the far right, the webcam output is finally displayed via patch “8″. Here I used a patch that draws a sphere on the screen and textured the sphere (covered the sphere with an image) with the webcam feed after it has been resized and contrast added.
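If the patch-wiring idea is still fuzzy, here's the same resize-then-contrast chain written as plain Python over a grid of 0.0-1.0 grayscale values (a loose analogy I made up for this post, not anything Quartz actually runs):

```python
def resize_half(img):
    """Downsample by keeping every other pixel in each direction,
    like dropping the webcam resolution for a slow laptop."""
    return [row[::2] for row in img[::2]]

def contrast(img, amount=2.0, pivot=0.5):
    """Push pixel values away from mid-gray, clamped to 0.0-1.0."""
    return [[min(1.0, max(0.0, pivot + amount * (p - pivot))) for p in row]
            for row in img]

# Wiring one patch's output circle into the next patch's input circle
# is essentially feeding one function's result to the next:
frame = [[0.2, 0.8], [0.4, 0.6]]
textured = contrast(resize_half(frame))
```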
So now we have a sphere with the webcam video on it, but it’s not doing anything “in time” with the music being played.
What I decided to do was to change the diameter of the sphere based on the music as well as the color tint of the sphere.
If you look at patch "2" on the left, you'll notice 14 circles on its right side. These represent different frequency bands of the music coming in from the microphone – the same kind of thing you'd see on an equalizer on your stereo. (It's actually split into 16 bands in Quartz; I just only use 14.) Each of those circles has a constantly changing value (from 0.0000 – 1.0000) based on the microphone input. Music with lots of bass, for example, would have high numbers in the first few bands and low numbers in the last few. We use these bands to change the sphere diameter and color.
I chose a midrange frequency band to control the size of the sphere because that's constantly changing, whether the music is bass-heavy or tinny. You can see a line going from the 6th circle down in patch "2" drawn to the "Initial Value" circle of patch "5". Patch "5" is a math patch that performs simple arithmetic operations on values it gets and outputs the results. All I'm doing here is making sure my sphere doesn't get smaller than a certain size. Since the audio splitter is sending me values from 0.000 – 1.000, I could conceivably have a diameter of 0. So, I use the math patch to add enough to that value that my sphere will always take up about a 25th of the screen, at its smallest. Patch "5" then sends that value to the diameter input of the sphere patch (#8) we discussed earlier.
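In plain Python, the math patch's job amounts to something like this (the floor and scale numbers here are made up for illustration, not the composition's actual values):

```python
def sphere_diameter(band_value, floor=0.04, scale=0.5):
    """Map a 0.0-1.0 audio band value to a sphere diameter.
    The floor keeps the sphere visible even in silence (roughly
    a 25th of the screen); scale controls how much the music
    swells it beyond that."""
    return floor + scale * band_value
```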
It’s these kinds of small decisions that, when compounded on one another, add up to visualizations with specific aesthetic feelings and contribute to the ultimate success or failure of the piece.
Another aspect of controlling the feel of your piece is color. In patch "6", you see three values from the audio splitter go in, but only one come out. I used the three values as the initial seeds for the "Red", "Green", and "Blue" components, and patch "6" converts them into an RGB color value. However, notice that patch "6" has three "Color" circles on the right, but only one gets used? That's because I designed that patch to take in one set of Red, Green, and Blue values based on the music, but mix those values into three -different- colors. So as the music changes, those three colors all change in sync, at the same time and by roughly the same amount, but they're still different colors. That lets me add variety to the piece and allows me, as the artist, to create a dynamic "palette" to choose from that will always be different, but still keep constant color relationships. This contributes to a cohesive and consistent feel to the piece. A detailed explanation of how I do that is out of the scope of this post, but you can see the code below and take some guesses if you like:
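If you want something concrete to guess from, here's one way the general trick could work in Python (not my actual patch logic, just the flavor): rotate the hue of a single music-driven seed color to get a trio that moves together but stays distinct.

```python
import colorsys

def palette(r, g, b, offsets=(0.0, 1 / 3, 2 / 3)):
    """From one music-driven RGB seed, derive three related colors
    by rotating the hue while keeping saturation and brightness
    fixed, so the trio shifts in sync as the music changes but the
    colors stay distinct from one another."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return [colorsys.hsv_to_rgb((h + o) % 1.0, s, v) for o in offsets]
```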
And that’s pretty much that. We have a sphere that displays webcam input and which changes size and color according to the music playing nearby. But that’s really not all that interesting is it? What if we added a few more spheres? What if we used all three of the colors from patch “6″? What if those spheres all moved in time to DIFFERENT bands of the music?
The code might look something like this:
And the resulting output looks something like this:
Yeah, I know the visuals are sort of silly and the song cheesy, but the music's beat is easy to see and there just isn't that much in my apartment to put on webcam that I haven't already.
Also, take a look at 55 seconds through about 1:05. The visualization goes a bit crazy. See the white box on top? You can't see it in the video, but that box lets me enter input parameters on the fly to affect how the visualization responds. This is the VJ aspect. For these visualizations, I've only enabled two: how fast/big the visual components get and how fast/slow they get small. In that 10-second segment, I'm jacking them up a lot.
What about the original video? What does that code look like? See below. It's a little bit more complicated, but essentially the same thing. Instead of 16 spheres, I use a rotating 3D cube and a particle fountain (squares spurt out of a specific location like out of a fountain). In addition to just color and size, the music playing nearby also affects location, rotation, minimum size, speed of the particles, and a number of other visual elements:
At some point (as soon as I figure out the Cocoa), I'll upload the visualizer here as a Mac OS X application for download.
So, what do you think? Is this art? If not, what is it? Just something that looks cool? In my mind, artistic vision and aesthetics are a huge component of making “multimedia” “new technology” art, no matter how big a component the technology is. Without some sort of understanding of what you are visually trying to communicate, it’s only by chance that you’ll end up with something that looks good. But, even beyond that, I found that I had to think pretty far ahead and understand my medium in order to create something that would look consistent AND visually pleasing no matter what environment it was in and no matter what it was reacting to. It was like writing the rules to create an infinite number of abstract paintings that would always look like they were yours.
Also, figuring out what to put in the webcam view, when, and at what distance is an important part. When I'm paying attention (as in the first video), it adds a whole new dimension. When I don't care and point it at anything (as in the demo videos), the whole thing becomes a bit more throwaway.
First, I finished the Python code I was working on that will allow two -color- images to be merged into one color mosaic. The color transformations it has to make to fit the smaller picture to the larger one seem to result in some pretty wild effects – I'm digging it. I'll clean up the code and post it here tomorrow.
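To give a taste of the general idea in the meantime, here's a toy Python sketch (of the technique in general, not my real code): shift each tile of the big image partway toward the corresponding pixel color of the small one.

```python
def tint_mosaic(big, small, strength=0.5):
    """Toy version of the two-image merge: 'big' is a grid of RGB
    tile colors, 'small' a same-shaped grid of RGB pixels. Each
    tile of 'big' gets shifted partway toward the matching pixel
    of 'small', so the small image emerges from the large one when
    seen at a distance."""
    out = []
    for big_row, small_row in zip(big, small):
        # Linearly interpolate each channel toward the target color
        row = [tuple(t + strength * (c - t) for t, c in zip(tile, target))
               for tile, target in zip(big_row, small_row)]
        out.append(row)
    return out
```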
As far as social stuff goes: Angela Kleis’s blogger night at Artomatic was pretty cool. I don’t want to post a lot of thoughts on that yet (I will tomorrow), but it did reinforce the fact that a lot of event management will have to be done at the June 6th ArtDC Artist’s tour dinner. Unfortunately, people have short attention spans and the time each artist speaks will have to be managed and expectations set ahead of time. 5 minutes seems to be about the “max”. We’ll have to bring a timer or something. It’s going to be a -really- interesting night, though, and a lot of fun.
More info on the upcoming dinner can be found in this thread:
Pictures of Blogger’s Night can be found here in a set:
Finally, Erin Antognoli took a couple of great shots of my space while I talked about it to what was left of the crowd:
April 28: I’ll be broadcasting an ArtDC.org member’s band “American Sinner” from Artomatic into the gallery from 9:00-10:00pm Saturday night.
May 4: We’ll be broadcasting a number of artists from Artomatic’s Electric Stage into Second Life, including:
(We’ll also try to project the gallery on a wall of the Electric Stage (instead of the digi room), but Im still waiting on permission for that)
May 2: At 7:30pm in the Lapis Auditorium at Artomatic, I'm going to be doing a live hands-on presentation focusing specifically on Second Life and art, but also on the theoretical role of virtual worlds in art (SL is just the most well-developed at this point).
The overall talk is geared towards explaining more thoroughly what I've only talked about in short clips to people (i.e., making it more coherent) and advocating the use of a vastly underutilized and underappreciated art medium and tool in a digital age.
In a larger sense, the art world is ridiculously behind the technological world and needs to catch up soon.
The (related) points I'll be covering are:
- “How to be there when you’re not” – Second Life as a live event presentation and extension mechanism
- "How to let people walk through your dreams with you" – Second Life as an art medium in its own right and how it can help explore ideas before creating them in real life. Here I'll be referencing some of Rebecca Gordon's work (directly) and Tim Tate's (indirectly) as examples (Tim's dorkbot talk was great and I meant to do this then).
- “Remembering your friends” – Second Life’s impact and role in presenting art and observations on the role of socialization in art
- “Who cares and why?” – Perspectives on how to market your virtual world presence and use it to your advantage using examples of things that worked, didn’t work, and could have been done or done differently
May 5: I’ll be giving tours of the gallery and any AOM art submitted to me for Second Life Display from 7:30 – 8:30pm EST
May 12: (time TBD) Artists and observers will take part in a live group art critique of works from Artomatic artists and others, as part of a special session of Eshi Otawa's weekly Open Art Critique, held every Sunday night in her gallery, the Luxor. If you'd like to participate as an artist, please send me a jpg of a piece you'd like discussed and plan on attending in Second Life or at Artomatic Saturday evening. If you'd just like to talk about art but don't have any to show, please come by as well! Your input is valued.
May 18: On the 18th, we have a folk artist from Atlanta performing in Second Life and being displayed in the Digi room. The backdrop to her performance will be art previously submitted by AOM artists. (More details coming.)
May 19: Sculpture contest with voting and live Artomatic bands. This will be a well-advertised and well-attended event. We'd love to have people in the Digi room participating and voting on the sculptures. We're giving away almost $100 in prizes to the virtual artists this night. Again, AOM art submitted to me will be the backdrop for this event. (More details coming.)