
Some friends of mine were recently speaking on a cyber security panel at a non-computer-geek conference. While they got a higher-than-expected number of attendees, it was still lower than they would have liked. While watching some of the other panelists crash, burn, and then bury themselves at the center of the earth, they came up with a list of pointers for making cyber security talks more palatable, based on specific failures they saw (whether humorous or serious). They were off-the-cuff, but I thought they made a good list. This is part 1. Comments? Thoughts? Additions? :)

  1. Talking over your audience’s head is mean. No one cares how smart you are unless you can make them just as smart on your topic in 20 minutes or less.
  2. Speaking of 20 minutes: stay on the clock. Wasting 15 minutes of someone else’s time is presumptuous and rude.
  3. Having a Slide Extravaganza doesn’t make you a good presenter. Slides are talking points, nothing more. By the 98th slide, your audience will hate you.
  4. Engage. If people opt to read their horoscope on their l33t Droids rather than watching you in person, your presentation sucks.
  5. Tone. If you have a terrible voice, amplifying it with a microphone is just plain mean. Record yourself ahead of time and listen to it. Adjust accordingly.
  6. Hair Matters.
  7. Thanking everyone for thanking the thank-you people gets redundant. Appreciation is one thing, but it’s not the Academy Awards.
  8. Pick one point. Maybe two. Not 438. Your audience is not Neo. They will not be able to learn Kung Fu.
  9. Relevance. Know the audience and have a backup plan if no one can relate to what you’re talking about. Otherwise, you’re just filling space.
  10. Smile. If it’s supposed to be a joke and you frown, your audience might not get the cue to laugh.
  11. If you smile while you make a joke and the audience still doesn’t laugh, see “know the audience” (or “talking over your audience’s head”).
  12. Look nice. There are enough cave trolls in the audience. Give people something better to look at.
  13. Be a wingman. If one of your colleagues is getting ogled by an above-mentioned cave troll, be sure to intervene on her behalf. Especially if the cave troll is of unspecified gender.
  14. Don’t let friends sit in the back row and make you laugh unless they’re part of your shtick. Especially on a panel when it’s not your turn.
  15. Bring pillows. If you’re going to put people to sleep, they may as well be comfortable.
        

This is an excerpt from an email I sent to a number of colleagues as we hash out what our various and competing mandates to “secure cyberspace” (in our own domains) actually involve and what we have to do about them. My position is that, first and foremost, we need a business security architecture to answer those questions, and that’s where the following conversation is leading. Various recent discussions I’ve been a part of outside of the job – just in the industry – were also on my mind when I wrote this. There seems to be this belief that “cyber security” is somehow a technical problem, and I couldn’t disagree more. (Note: I’m speaking generally below and am leaving out detail for the sake of brevity – I don’t have time to write a dissertation. :) Note 2: The fact that I don’t discuss our complete and utter failure to universally define “identity” doesn’t mean I don’t believe it significantly impacts our ability to secure “stuff”.)

Security really comes down to the identity of systems and people: who needs to interact with whom, what transactions they can perform between them (and in what direction), how long those transactions may last, how often they may occur, and which side of the transaction “owns” the security of the transaction. Everything you do to secure your domain of control flows from this information. It literally allows us to place and configure the technical and process security controls which you have described as being a sector need.
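
To make that concrete, here is a minimal sketch (Python, with field names of my own invention – none of this is from any real architecture) of the kind of record that captures those business answers for a single allowed transaction:

```python
from dataclasses import dataclass

@dataclass
class TransactionPolicy:
    """One allowed interaction between two identities.

    This is descriptive, not enforcement: it captures the business
    answers that control placement later depends on.
    """
    initiator: str          # identity allowed to start the transaction
    responder: str          # identity on the other end
    operations: list[str]   # what may be performed between them
    direction: str          # "initiator->responder", "bidirectional", ...
    max_duration_s: int     # how long a transaction may last
    max_per_day: int        # how often it may occur
    security_owner: str     # which side "owns" the security of the transaction

# Hypothetical example: a billing clerk may submit invoices to the ERP system.
policy = TransactionPolicy(
    initiator="role:billing-clerk",
    responder="system:erp",
    operations=["submit-invoice"],
    direction="initiator->responder",
    max_duration_s=900,
    max_per_day=50,
    security_owner="system:erp",
)
```

Collect enough of these and you have the raw material for placing and configuring controls, rather than guessing.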

If you wanted to describe a “secure” cyber system without this information, your answer necessarily would have to be: “the computer will be in an electromagnetically shielded room with hardware memory that can’t be written to, no disk drives, no connection outside the room, and anyone who uses it must be x-rayed and without clothing that could hide plastic or wooden tools.” That’s obviously silly, but we just said “secure” – we didn’t define exceptions to “secure”. So how is the middle ground defined? What does “secure” mean to you? I’d suggest it means that whatever controls are in place enable you to continue to operate in a way that supports your mission. In other words, since security is first defined as “the system should not do anything more or less than I want it to”, you must define what you want it to do. Then you go through a prioritization effort of what you want it to do so that your budget can support it.

One of the reasons that developing this business context of “what should my systems do” is important to the budget piece is that this information allows us to begin to create attack trees. Attack trees help us understand where the weak points in the system are *that we care about* (not just every possible attack). Few of those weak points are immediately apparent after an informal inspection, so a process is needed. From attack trees and control placement, we can then prioritize our efforts based on a combination of technical vulnerability, business risk, and available money, and work out a budget. Otherwise you’re just making up numbers and -hoping- your investment pays off.
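
As a hedged illustration of the attack-tree idea – the structure, goals, and cost numbers below are invented for the example, not taken from any real assessment:

```python
# An attack tree: nodes are attacker goals, leaves are concrete steps.
# Only branches that threaten a transaction *we care about* get expanded,
# which is the prioritization point made above.

class AttackNode:
    def __init__(self, goal, gate="OR", children=None, cost=0):
        self.goal = goal            # attacker objective at this node
        self.gate = gate            # "OR": any child suffices; "AND": all are needed
        self.children = children or []
        self.cost = cost            # rough attacker effort, for leaves

    def cheapest_path_cost(self):
        """Attacker's least-effort route to this goal; low cost = high priority."""
        if not self.children:
            return self.cost
        costs = [c.cheapest_path_cost() for c in self.children]
        return min(costs) if self.gate == "OR" else sum(costs)

tree = AttackNode("Exfiltrate invoice data", "OR", [
    AttackNode("Steal clerk credentials", "OR", [
        AttackNode("Phish the clerk", cost=2),
        AttackNode("Keylog shared workstation", cost=5),
    ]),
    AttackNode("Exploit ERP web tier", "AND", [
        AttackNode("Find unpatched service", cost=4),
        AttackNode("Bypass firewall rule gap", cost=6),
    ]),
])
print(tree.cheapest_path_cost())  # -> 2: phishing is the weak point to fund first
```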

And, as far as using universal standards in this area, your requirements MAY look like someone else’s, but the exceptions to that become maliciously exploitable, so you still need to validate and manage your environment and business requirements.

Without this information defined, your security controls will have significant gaps in placement, they will not be effectively auditable for malicious activity, their configurations will not accurately reflect the real security needs of the system, and you will most likely run out of security budget before you’ve mitigated a significant amount of risk. On the macro level, these really are the areas which hackers and malicious actors exploit with consistently rich returns.

Technical vulnerabilities will always exist, and we do need to maintain awareness of them and processes to try to keep up with them, but these are largely well understood – and we still fail (and fail dramatically) to actually prevent intrusions, data exfiltration, denial of service, etc., even when we have controls in place. It’s not for lack of technology, really. We have controls out the wazoo – firewalls, antivirus, IDS, etc. We’re just not using them rationally.

This business context (or at least the education and tools to develop it) – if you were to look at every company and stakeholder in the sector as being part of the same business (the business of ____) – is what I hope we in the working group can begin to help provide. I really believe that what might seem like fuzzy talk without a lot of action will ultimately result in a concrete way forward to reduce or mitigate the risk we all face.

Update: You can now download a Webcam Audio Visualizer based on the one referenced in this tutorial – and some completely new ones – by clicking HERE.

INTRO

So I’ve been making some new art lately that I think is pretty cool. Back at Artomatic last year, I wrote code that generated a mosaic of one image out of another and made a 6’x6′ photo, and I wondered whether the code itself was art, since the only thing it did was generate that one mosaic.

At that point, though, it was still static and the question was (to me) relatively easy to answer.

This time, I wanted something more dynamic and interactive. I wanted to further explore the question of whether or not something that changes every time you see it, and which depends on its environment, is still “art”. What I ended up doing is using Apple’s Quartz Composer – a visual media programming language – to create an “audio visualizer” (sort of like you see in iTunes, Winamp, etc.). What’s different about this piece, though, is that it combines live webcam input with live audio input into a pulsating, moving interpretation of the world around the piece.

In some ways, the work can be considered just a “tool”. But, on the other hand – and more importantly, I think – the fact that the ranges of color, proportion, size, placement, and dimension have all been pre-designed by the artist to work cohesively no matter what the environmental input moves it into the realm of “art”.

In this post, I hope to use the piece in a way that will give you an example of what it would look like as part of a real live installation and to help explain the ins and outs of my process.

THE BASICS

An easy example of where this would do really well is at a music concert. The artist would point the camera at the band or the audience and, as the band plays, the piece would morph and transform the camera input in time to the music, while a projector displayed the resulting visuals on a screen next to the band (or even on the band itself). This is just one suggestion, though. Interesting static displays could also be recorded from live input to be replayed later. It’s this latter idea that you’ll see represented below (though you might notice my MacBook chugging a little on the visuals… slightly offbeat. That’s a slow-hardware issue :) ):

In that clip, I pointed the webcam at myself and a variety of props (masks, dolls, cats, the laptop, etc) as music plays from the laptop speakers. There was a projector connected to the laptop displaying the resulting transformations onto a screen in real time. A video camera was set up to record the projection as it happened.  My setup isn’t much, but it can be confusing, so take a look below. My laptop with the piece on it, webcam connected to the laptop, projector projecting the piece as it happens, and video camera recording the projection:

Quartz Webcam Audio Visualizer Demo Recording Setup

TUTORIAL/EXPLANATION

As I said earlier, I used Quartz Composer – a free programming language from Apple upon which a lot of Mac OS X depends. Some non-technical artists might be a little leery of the term “programming language”, but Quartz is almost designed for artists. It’s drag and drop. Imagine if you could arrange Legos to make your computer do stuff: red Legos did one type of thing, blue did another, green did a third. That’s basically Quartz. There are preset “patches” that do various things: get input, transform media, output media somehow, etc. You pick your block and it appears on screen. If you want to put webcam input on a sphere, you put a sphere block on the screen, put a video block on the screen, and drag a line from the video to the sphere. It’s as easy as that. First, I’d suggest you take a look at this short introduction by Apple here:

http://developer.apple.com/graphicsimaging/quartz/quartzcomposer.html
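
Quartz itself is purely visual, so there’s no code to paste here, but the patch model is easy to caricature in a few lines of Python. This is just a sketch of the concept – the class and port names are mine, not Apple’s:

```python
# Toy model of the Quartz Composer patch idea: patches have input and
# output "circles" (ports), and dragging a line just wires one patch's
# output to another patch's input. All names here are illustrative.

class Patch:
    def __init__(self, name, process):
        self.name = name
        self.process = process  # function: dict of input values -> dict of outputs
        self.wires = {}         # input port -> (source patch, source port)
        self.static = {}        # input port -> fixed value (double-click a circle)

    def connect(self, port, source, source_port):
        self.wires[port] = (source, source_port)

    def run(self):
        values = dict(self.static)
        for port, (src, src_port) in self.wires.items():
            values[port] = src.run()[src_port]
        return self.process(values)

video  = Patch("Video Input",  lambda v: {"Image": "frame-from-webcam"})
resize = Patch("Image Resize", lambda v: {"Image": f"{v['Image']} @ {v['Width']}x{v['Height']}"})
sphere = Patch("Sphere",       lambda v: {"Render": f"sphere textured with {v['Image']}"})

resize.static = {"Width": 640, "Height": 480}  # static values, set by double-clicking
resize.connect("Image", video, "Image")        # drag a line: video -> resize
sphere.connect("Image", resize, "Image")       # drag a line: resize -> sphere
print(sphere.run()["Render"])
```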

Then take a look at the following clip, and I’ll walk you through how it works at a high level:

The code for this is fairly straightforward:

Simple Quartz Composer Webcam Audio Visualizer

In the box labeled “1” on the left, I’ve inserted a “patch” that collects data from a webcam and makes it available to the rest of the “Composition” (as Quartz programs are called). On the right side of that patch, you can see a circle labeled “Image”. That means the patch will send whatever video it gets from the webcam to any other patch that can receive images. (Circles on the right side indicate things that a patch can SEND to others. Circles on the left indicate information that a patch can RECEIVE from others.)

The patch labeled “3”, next to the video patch, is designed to resize any images it receives. I have a slow MacBook, but my webcam is high definition, so I need to lower the webcam’s resolution (make the pictures smaller) so my laptop can better handle it. It receives the video input from the video patch, resizes it, and then makes the newly resized video available to any patch that needs it. (You can set the resize values through other patches by connecting them to the “Resize Pixels Wide” and “Resize Pixels High” circles, but in this case they are static – 640×480. To set static values, just double-click the circle you want to set and type in the value you want it to have.)

In the patch labeled “4”, we do something similar, but this time I have it change the contrast of the video feed. I didn’t really need to, but I wanted to see how it looked. The Color Control patch then makes the newly contrasted image available to any other patch that needs it.

On the far right, the webcam output is finally displayed via patch “8”. Here I used a patch that draws a sphere on the screen and textured the sphere (covered the sphere with an image) with the webcam feed after it has been resized and contrast added.

So now we have a sphere with the webcam video on it, but it’s not doing anything “in time” with the music being played.

What I decided to do was change both the diameter of the sphere and its color tint based on the music.

If you look at patch “2” on the left, you’ll notice 14 circles on the right side of it. These represent different frequency bands of the music coming in from the microphone – the same sort of thing you’d see on an equalizer on your stereo. (Quartz actually splits the audio into 16 bands; I just use 14.) Each of those circles has a constantly changing value (from 0.0000 to 1.0000) based on the microphone input. Music with lots of bass, for example, would have high numbers in the first few bands and low numbers in the last few. We use these bands to change the sphere’s diameter and color.
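
I don’t know exactly how the Quartz audio patch computes its bands internally, but the standard approach is an FFT whose bin magnitudes are grouped into bands and normalized to 0.0–1.0. A rough Python sketch of that idea (my guess, not Apple’s implementation):

```python
import numpy as np

def audio_bands(samples, n_bands=16):
    """Split one buffer of audio into n_bands levels in 0.0-1.0,
    lowest frequencies (bass) first -- roughly what the audio
    splitter patch hands you on its output circles (my guess)."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    chunks = np.array_split(spectrum, n_bands)    # group FFT bins into bands
    levels = np.array([c.mean() for c in chunks])
    peak = levels.max()
    return levels / peak if peak > 0 else levels  # normalize to 0.0-1.0

# 1024 samples of a bass-heavy test tone at 44.1 kHz:
t = np.arange(1024) / 44100.0
buffer = np.sin(2 * np.pi * 80 * t)               # 80 Hz sine
print(audio_bands(buffer)[:4])                    # the lowest bands dominate
```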

I chose a midrange frequency band to control the size of the sphere because it’s constantly changing, whether the music is bass-heavy or tinny. You can see a line going from the 6th circle down in patch “2” to the “Initial Value” circle of patch “5”. Patch “5” is a math patch that performs simple arithmetic operations on values it gets and outputs the results. All I’m doing here is making sure my sphere doesn’t get smaller than a certain size. Since the audio splitter is sending me values from 0.000 to 1.000, I could conceivably have a diameter of 0. So I use the math patch to add enough to that value that my sphere will always take up about a 25th of the screen at its smallest. Patch “5” then sends that value to the diameter input of the sphere patch (#8) we discussed earlier.
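
In code, that math-patch step is a one-liner. The “about a 25th of the screen” floor is from the post; the units and the scale factor here are my guesses:

```python
MIN_DIAMETER = 0.04   # stands in for "about a 25th of the screen"
SCALE = 0.5           # how much a loud band may grow the sphere (my guess)

def sphere_diameter(band_level):
    """Mimic the math patch: add a floor so a silent band (0.0)
    never shrinks the sphere to nothing."""
    return MIN_DIAMETER + band_level * SCALE

print(sphere_diameter(0.0))   # 0.04 -> the sphere never disappears
print(sphere_diameter(1.0))   # 0.54 -> loud midrange, big sphere
```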

It’s these kinds of small decisions that, when compounded on one another, add up to visualizations with specific aesthetic feelings and contribute to the ultimate success or failure of the piece.

Another aspect of controlling the feel of your piece is color. In patch “6”, you see three values from the audio splitter go in, but only one color come out. I used those three values as the initial seeds for the “Red”, “Green”, and “Blue” values, and patch “6” converts them into an RGB color value. But notice that patch “6” has three “Color” circles on the right, and only one gets used? That’s because I designed that patch to take in one set of Red, Green, and Blue values based on the music, but mix those values into three -different- colors. So as the music changes, those three colors all change in sync, at the same time, and by roughly the same amount, but they’re still different colors. That lets me add variety to the piece and allows me, as the artist, to create a kind of dynamic “palette” to choose from that will always be different but still keep constant color relationships. This contributes to a cohesive and consistent feel for the piece. A detailed explanation of how I do that is out of scope for this post, but you can see the code below and take some guesses if you like:

Color Manager Patch Code
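
Since the post invites guesses, here’s one plausible (and purely speculative) way to turn a single audio-driven RGB seed into three colors that change together but stay distinct – rotate the seed’s hue:

```python
import colorsys

def related_colors(r, g, b, offsets=(0.0, 1/3, 2/3)):
    """A guess at the color-manager idea: take the audio-driven RGB
    seed, then rotate its hue to get three colors that always move
    in sync but keep constant relationships to each other."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return [colorsys.hls_to_rgb((h + d) % 1.0, l, s) for d in offsets]

# Seed from three frequency bands (each in 0.0-1.0):
for color in related_colors(0.9, 0.2, 0.4):
    print(tuple(round(c, 2) for c in color))
```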

And that’s pretty much that. We have a sphere that displays webcam input and changes size and color according to the music playing nearby. But that’s really not all that interesting, is it? What if we added a few more spheres? What if we used all three of the colors from patch “6”? What if those spheres all moved in time to DIFFERENT bands of the music?

The code might look something like this:

Multi-Sphere Visualizer Code
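
A hedged sketch of what that structure might boil down to: each sphere listens to its own frequency band and cycles through the three palette colors. The rendering is faked with print(); in Quartz, each entry would be its own Sphere patch:

```python
def render_frame(bands, palette, min_diameter=0.04, scale=0.5):
    """Drive one sphere per frequency band, reusing the same
    floor-plus-scale math as the single-sphere version."""
    spheres = []
    for i, level in enumerate(bands):
        spheres.append({
            "diameter": min_diameter + level * scale,  # same math patch idea
            "color": palette[i % len(palette)],        # cycle the three colors
        })
    return spheres

bands = [0.9, 0.1, 0.5, 0.3]                     # pretend audio-splitter output
palette = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]      # pretend color-manager output
for s in render_frame(bands, palette):
    print(s)
```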

And the resulting output looks something like this:

Yeah, I know the visuals are sort of silly and the song cheesy, but the music’s beat is easy to see, and there just isn’t that much in my apartment to put on webcam that I haven’t already.

Also, take a look at 55 seconds through about 1:05. The visualization goes a bit crazy. See the white box on top? You can’t see it in the video, but that box lets me enter input parameters on the fly to affect how the visualization responds. This is the VJ aspect. For these visualizations, I’ve only enabled two: how fast/big the visual components get and how fast/slowly they shrink. In that 10-second segment, I’m jacking them up a lot.
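
The post doesn’t say how those two knobs work internally, but they sound like an attack/decay pair: the visual level follows the audio upward at one speed and falls back at another. A speculative sketch:

```python
def smooth(current, target, grow_rate, shrink_rate):
    """Follow the audio level with different speeds going up vs. down --
    the kind of on-the-fly knobs described above. Cranking grow_rate
    makes the visuals jump; lowering shrink_rate makes them linger."""
    rate = grow_rate if target > current else shrink_rate
    return current + (target - current) * rate

size, levels = 0.0, [0.9, 0.9, 0.2, 0.2, 0.2]
for level in levels:
    size = smooth(size, level, grow_rate=0.8, shrink_rate=0.2)
    print(round(size, 2))   # rises fast on the beat, decays slowly after it
```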

What about the original video? What does that code look like? See below. It’s a little bit more complicated, but essentially the same thing. Instead of 16 spheres, I use a rotating 3D cube and a particle fountain (squares spurt out of a specific location like water out of a fountain). In addition to color and size, the music playing nearby also affects location, rotation, minimum size, speed of the particles, and a number of other visual elements:

Full Webcam Audio Visualizer Code

At some point (as soon as I figure out the Cocoa), I’ll upload the visualizer here as a Mac OS X application for download.

SUMMARY

So, what do you think? Is this art? If not, what is it? Just something that looks cool? In my mind, artistic vision and aesthetics are a huge component of making “multimedia” “new technology” art, no matter how big a component the technology is.  Without some sort of understanding of what you are visually trying to communicate, it’s only by chance that you’ll end up with something that looks good.  But, even beyond that, I found that I had to think pretty far ahead and understand my medium in order to create something that would look consistent AND visually pleasing no matter what environment it was in and no matter what it was reacting to. It was like writing the rules to create an infinite number of abstract paintings that would always look like they were yours.

Also, figuring out what to put in the webcam view, when, and at what distance is an important part of it. When I’m paying attention (as in the first video), it adds a whole new dimension. When I don’t care and point it at anything (as in the demo videos), the whole thing becomes a bit more throwaway.

We leave Thursday. That’s 4 and a half days from now. Doug and Nguyet are already in San Francisco and the next time we’ll see them is at the Hong Kong airport right before the last leg of our trip to Vietnam. We’ll be there for three weeks and, while that’s not a lot of time for tried-and-true-backpacking-scenesters, it’s a lot for us on a complete whim (and on unpaid leave).  Still, we’re looking at it as a test-run for a longer trip through the area.

Now that it’s so close to departure time, I’ve been taking a close look at what I’ve bought or selected for the trip and what I actually want to carry. It’s kind of interesting how much “technology” there is for something as old and simple as backpacking. I’d like to think I travel minimally (I usually only take a single school-sized backpack on ANY trip), but the geek in me (and the corporate consumer) couldn’t resist getting “feature packed” gear. From a $299 ASUS Eee Linux laptop to a Kindle to “medicating runners socks” to zip-off pants, there was some technological bit or feature in almost everything I packed that made me decide to bring it. Could I have just brought t-shirts? Sure. But I needed lightweight, fast-drying, wicking shirts :P

Click the pic below to go to Flickr. Each item in the pic is labeled with what it is, if you’re interested:

I’ll post more trip details tomorrow – like where exactly we’re going for sure and what we’d like to try and see.

So I recently bought a ton of film strip gear off of Craigslist. Do you all remember this stuff from elementary school? Or, if you’re older, high school? They’re basically like slide presentations, except the images aren’t ever cut from the strip. You insert the strip into a projector or personal viewer and play either a tape or a record for the soundtrack. When you hear a BEEP on the soundtrack, you flip to the next image on the strip.

I always thought they were dumb in school, but I did want to make my own at the time, and they’ve been on my mind a lot lately for whatever reason. So I was pretty thrilled when someone on artdc pointed out a Craigslist ad the librarian at Queen Anne School in Upper Marlboro had put out: 4 projectors, 3 personal viewers, and 60 strip presentations for $100. Holy cow!

Anyway, I got all this gear delivered to work (it takes up… an entire… cube…) last week and have slowly been hauling it home and playing with it. I’ve found I want to explore three potential uses for it:

1. Cutting up and reusing the material from the film strips in other art as light-driven collage material

2. Making an actual film strip in the old style – simple lettering, exaggerated imagery – and doing a projection show of some sort

3. Using the projectors and gear as part of photo still lifes.

One of these three is obviously easier than the others, so I’ve started out taking pictures of the projectors and strips (Paivi has also been photographing some of the projected images). I put up a few of the recent shots on Flickr, and one of them made DCist’s Photo of the Day:

 

Some of the other shots are here:

In a stylized world where taste is often found below deck, bound and gagged, you sometimes wonder “why bother?”. In a place where social pornography is the breakfast of champions, you don’t often run into anything of consequence. What’s below the surface, after all, except more surface? Certainly nothing special.

Tonight, however, I had the good fortune to be given a tour of something particularly special in a place just like that.

This evening, during one of my brief visits to Second Life (opening an island takes a lot of planning – not much time to socialize), I asked my good friend Eshi Otawara how the opening of her collaborative project, Parsec, had gone Saturday night. Apparently it had gone quite well, and she almost immediately offered to teleport me over to the installation area. After briefly tweaking my headphones and mic (which, I had been warned, were required!) and working through a couple of other technical difficulties, I was whisked over to a dark room with a couple of other individuals.

This, apparently, was a waiting area of sorts while everyone got themselves in order for the experience. Eshi handed me some animations and told me to activate them. Seven people are normally required to “operate” Parsec, I was told, but we were going to make do with three or four, and the animations were a critical component of the piece.

Eventually, we all touched the grey teleport sphere and were taken up to the feature presentation. At this point I still wasn’t sure what it was about, other than that there was some interactive tie-in between voice, music, and visual imagery.

We found ourselves standing on a transparent floor inside a giant white sphere, the inside of which was textured in a way that reminded me of hundreds of CDs. Around us were seven black balls, each with a unique pattern of dots on it. Eshi then essentially turned us loose and just told us to… talk. So we did. Not sure, at first, what was expected of us (what DO you say when someone asks you to just ‘talk’?), we wandered around vocalizing somewhat arbitrarily. What we found was that, as we spoke, the balls moved. And as the balls moved, we heard the sounds of instruments.

What we were experiencing was the first installation in Second Life where the environment responded to the sound of a voice. Each person in the sphere was linked to one of the black spheres around them. As an individual spoke, a certain behavior by the sphere – and thus a certain set of sounds – was triggered, depending on how your voice sounded at the time. There were 10 (or 16? I don’t remember the exact number) “ranges” that each person could trigger from his or her sphere.
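
I have no idea how Parsec actually maps voices to behaviors, but the simplest scheme consistent with that description is quantizing some feature of the voice (loudness, say) into N ranges – pure speculation on my part:

```python
def voice_range(level, n_ranges=10):
    """Quantize a voice feature (e.g. loudness in 0.0-1.0) into one of
    n_ranges buckets; each bucket would trigger one sphere behavior and
    its set of instrument sounds. Speculation, not Parsec's real code."""
    level = max(0.0, min(1.0, level))
    return min(int(level * n_ranges), n_ranges - 1)

for level in (0.05, 0.42, 0.99):
    print(voice_range(level))   # -> 0, 4, 9
```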

The net effect is that, as 7 or more people have a conversation in the installation, the environment reacts visually and audibly and creates a multi-sensory symphony written just for those people in those moments in time. The visuals were minimalistic at that point, but effective. (And they got better, I found out later!)

Another particularly interesting facet of Parsec is that there is a piece of it (pictured below in an image from their Flickr pool) which can only be unlocked through the unguided collaboration of the participants! As you “play” the Parsec instrument/exhibit with others, you might find that there are patterns or connections embedded and that, if you speak in cooperation, a new visual component is revealed and you find yourself immersed in something akin to a starbursting Eye of Horus.

Parsec (image from the Parsec Flickr pool)

As an artist, I’m intrigued by this cooperation required to complete the artwork. People have to figure out the problem and then work together to solve it. Rather than just being something built with the mathematics of music and aesthetics in mind, it requires a human element and the human mind to make it “work” completely. For all of the traditional art out there with NO connection to the human condition, it’s cool to see a virtual piece that manages instead to stay true to (what I think is) one of the primary roles of art in society – exploring ourselves.

For those naysayers who get visibly -angry- when they find out people spend time in Second Life, insisting there’s nothing there “to do”, this kind of art unequivocally proves not only that there are things “to do” that you don’t find anywhere else, but also that Second Life has been and continues to evolve as an art medium in its own right.

Congrats to the creators of Parsec for creating such a cool contribution to art and technology:

Concept, Music and Sound by Dizzy Banjo
Virtual Architecture by Eshi Otawara
Scripting by Chase Marellan

More info and a video can be found here: http://eshiotawara.wordpress.com/2008/01/19/41/

Eshi, thank you so much for the on-the-spot tour. It was fun to hear your voice for the first time and I thoroughly enjoyed listening to you in the role of a tour guide! I also am still smiling at the thought of you, alone, standing in Parsec singing to the machine.
