Per previous posts, I am making some free software available here (although it’s somewhat niche): A Mac OS X Distributed Objects server for the Neurosky brain wave reading Mindset and a Quartz Composer plug-in client for the server. (If you have neither OS X nor the Mindset, you might want to wait for a future post where I talk more about how the brain wave art project is coming.)
This post will also serve as a brief introduction to what it would take for you to write your own Cocoa client for the server. But if you just want the software, you can get it here:
- Server Application (and source code / Xcode Project)
- Quartz Composer Plug-In Client (and source code / Xcode Project)
- To install the client for Quartz Composer, close QC and copy the .plugin file to: “/Library/Graphics/Quartz Composer Plugins”. When you next open QC, you should find it in your Patch Library listed as “MindSetQCClient”. Usage of the patch should be obvious.
- The server shouldn’t need to be started first as long as the client periodically checks for a vended object, but when troubleshooting it’s a good idea to start the server first, then the client.
- The server needs the Thinkgear bundle in the same directory as the server app. (I’m not including the Thinkgear bundle; it’s available for free from the Neurosky website as part of their developer tools.)
- The Neurosky documentation has instructions for figuring out which serial port your Mindset is on. The default for the server is the one I use.
- I’ve borrowed so heavily from a hodge-podge of tutorials and examples, that I’m not going to include a license for the code. Use it as you will.
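The “client periodically checks for a vended object” behavior mentioned above can be sketched like this on the client side (the method name here is my own; only the NSConnection call is from the actual code later in this post):

```objective-c
// Retry the Distributed Objects lookup until the server is running.
// Call this periodically (e.g. from a timer or the QC execute method).
- (BOOL)connectIfNeeded
{
    if (sharedObject == nil) {
        sharedObject = (id <PassingMindData>)[[NSConnection
            rootProxyForConnectionWithRegisteredName:@"JacksMindsetServer"
            host:nil] retain];
    }
    return (sharedObject != nil);
}
```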
So, onward to the tutorial/implementation details:
Distributed Object Mindset Server and Client
This server is intended to be a little easier to use than some of the connection methods Neurosky provides (at least in my mind). It grabs data from the Mindset and provides it to Cocoa client applications (such as my Quartz Composer plug-in) by using Objective-C / Cocoa’s Distributed Objects interprocess messaging capability.
To access the Mindset data, the client must create an NSConnection to “JacksMindsetServer”. This gives it access to a vended object which supports the following very simple protocol (this protocol will have to be included in your client header file):
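(The original protocol listing isn’t reproduced here. Based on the methods described below, it looks roughly like this; treat the return types as a sketch rather than the exact header:)

```objective-c
// Sketch of the protocol, reconstructed from the methods described
// in this post; the exact types in the original header may differ.
@protocol PassingMindData
- (NSArray *)getOldestData;  // oldest queued line of Mindset values
- (int)getDataCount;         // number of lines currently queued
- (void)removeOldestData;    // remove the line just read
@end
```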
Creating the connection to the vended object which uses that protocol is simple and requires only a short bit of code:
NSString *_host = nil;
sharedObject = (id <PassingMindData>)[[NSConnection rootProxyForConnectionWithRegisteredName:@"JacksMindsetServer" host:_host] retain];
You should now have an object called “sharedObject” which responds to all of the methods specified by the “PassingMindData” protocol above and which will pass the data from the Mindset server to your code. The primary method is “getOldestData”: calling it returns an array holding the oldest line of values from the Mindset, while “getDataCount” returns the number of lines currently queued.
The returned array contains ordered NSNumbers representing each type of value available from the mindset. The array elements can always be accessed in the following order:
- Attention (0)
- Meditation (1)
- Raw (2)
- Delta (3)
- Theta (4)
- Alpha1 (5)
- Alpha2 (6)
- Beta1 (7)
- Beta2 (8)
- Gamma (9)
- Gamma2 (10)
- SignalQuality (11)
The client is left to access these elements as it pleases from the NSArray object returned by getOldestData. The server also relies on the client to remove the original data from the server as soon as it grabs it by calling “removeOldestData” on “sharedObject”. (If the client does not call this, there is no auto-cleanup by the server until it’s stopped or exits and the client will not be able to access new data.)
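For readability, a client might give those indices names with an enum. (These names are my own shorthand; the server does not define them.)

```objective-c
// Convenience names for the element order documented above;
// not part of the server's API.
enum {
    kMindAttention = 0, kMindMeditation, kMindRaw,
    kMindDelta, kMindTheta, kMindAlpha1, kMindAlpha2,
    kMindBeta1, kMindBeta2, kMindGamma, kMindGamma2,
    kMindSignalQuality
};

// e.g. [[mindDataLine objectAtIndex:kMindAttention] doubleValue]
```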
If multiple lines of data are queued, getOldestData and removeOldestData should be called repeatedly until the queue is drained. A simple example would be:
while ([sharedObject getDataCount] > 0) {
    mindDataLine = [NSArray arrayWithArray:[sharedObject getOldestData]];
    [sharedObject removeOldestData];
    [self setOutputAttention:[[mindDataLine objectAtIndex:0] doubleValue]];
}
That’s really it. Writing the server itself is outside the scope of this post, but Neurosky has some great documentation and has provided examples from which I have (heavily) borrowed.
Let me know if you have questions or need further explanation. I’m going to continue to work on the art project with this stuff and will post more about that later.
All, I’ll be giving a quick (5 minute) introduction to using Neurosky’s Mindset API to do cool stuff with your brainwaves – like making art while you sleep :) – on 02/23/10 @7:30pm as part of HacDC’s Lightning Talks (featuring 12 speakers for 5 minutes each). For the introduction, I’ll be using the simple Objective-C server and custom-written Quartz Composer plug-in client to display a visualization that responds to both your brainwaves and ambient noise/music together. Come out and see!
Check out the example proof-of-code video I did below (a longer post to come tomorrow):
As promised in the previous post, here are demo videos of my three new Quartz Composer Webcam Audio Visualizer compositions. I’m being a bit silly in them, but that’s because I don’t have an external webcam or anything else more artistic to point it at tonight. In the future, I might do a real non-demo piece of art with one or more of these. No promises, though. Next post will be about security, though, I swear. :)
Well, the HacDC Hacker’s Lounge event/party got canceled, which was too bad. However, I did write some valuable code and make some pretty cool looking new compositions. The code isn’t ready for release, but I did put up the compositions and they’re available for free download here: https://sintixerr.wordpress.com/quartz-composer-downloads/
I don’t have video for them yet (maaaybe later today), so you’ll just have to try them out for yourself. I actually like all three of these much more than the original.
Remember, OS X / Quartz Composer only.
( Hmm. I guess I should write a viewer for these so you don’t need Quartz. Many projects, little time, but we’ll see… )
EDIT: THIS HAS BEEN CANCELED DUE TO SNOW. Not sure what to do after ShmooCon Friday night? Not going to the con but need something to do? Come over to the HacDC Hacker’s Lounge event for a little while (runs 8pm-2am). I’ve been putting some fun NEW interactive Quartz video projections together for the event (link goes to early older work – need to show up to see newer stuff) and Daniel Packer will be doing some audio with SuperCollider. Oh yeah, and I hear there will be booze.
I can’t tell you if there will be 10 people or 100 there, but if you take a chance and show up, that’s 1 closer to 100 :)
EDIT: I have some newer, better webcam audio visualizers and some utility patches available now. Click Here: https://sintixerr.wordpress.com/quartz-composer-downloads/
For all of you who have asked for this, I’ve made my Artomatic Quartz Composer based webcam audio visualizer available as a free download. (Keep in mind, this is only for Mac OS X users – Quartz isn’t portable.)
You can download it here: http://jackwhitsitt.com/Artomatic09-final-whitsitt.zip
(I’m calling it “WAVIQ” for short…“Webcam Audio Visualizer In Quartz”…since it needs some sort of a name and I don’t feel that creative about it.)
A quick overview:
The composition has two inputs – the webcam and an audio source. If you have a built-in webcam, it will default to that. Likewise, if you have a built-in mic (most laptops do), the composition will default to using that as your audio source. You can change these by going into the patch inspector for the Video and Audio patches and selecting “settings”. (In the case of the audio, double-click the macro patch “Audio Source” and then click on “Audio Input” to get there.)
The only other settings you’ll be interested in are the Increasing Scale and Decreasing Scale parameters found in the Audio Input patch. These affect how fast the values for movement, color, etc. get bigger and how fast they get smaller. This will affect how the composition responds to different music. Also, keep in mind that in the audio settings of OS X itself, you can change the mic sensitivity. This will affect how the composition responds as well.
You can also find a basic tutorial to get you started on tweaking this in the links below.
That’s it. Drop me a line with any questions and have fun with it. If you do end up using it, I’d love to hear about it.
- Tutorial I wrote explaining the basics of how this works:
- Stop-Motion Video Example of how I’m using it at Artomatic:
- Screen-shots of my Artomatic Art Installation: