Video Helmet #2
so i finally posted some pictures of the degree show piece that Andy and i created.
i've been getting links and inquiries, and should probably have had a more thorough description ready to post at the same time.
i will get on that, but in case you think that i've abandoned this blog, i'll give a brief description of the project here. more will show up in the next couple of days.
we used a custom-built Jitter patch [ a piece of software written in Max/MSP + Jitter ] to control two cameras, a couple of microphones, and some prerecorded video and audio. this ran on a Mac Mini [ the PowerPC version ] with a video helmet [ video goggles and speakers inside ] as the output. the inputs, aside from the cameras and mics, were a series of triggers inside the beam that manipulated the combination and order of the sources. we designed the software to mix the prerecorded audio/video with the live feeds based on where the participant was standing or walking.
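to give a rough idea of what the patch was doing [ the real thing was a visual Max/MSP + Jitter patch, not text code ], here's a little Python sketch of the mixing logic. the trigger layout, blend amounts, and names are all invented for illustration:

```python
# rough sketch of the kind of mixing the Jitter patch did
# [ the real patch was visual Max/MSP + Jitter, not Python;
#   the trigger layout and blend amounts below are invented ]
import numpy as np

# one entry per trigger along the beam, ordered by position:
# which live camera to show and how much prerecorded video to blend in
TRIGGER_MIX = [
    {"camera": 0, "blend": 0.0},  # near the entrance: all live
    {"camera": 0, "blend": 0.5},  # midway: half live, half prerecorded
    {"camera": 1, "blend": 0.8},  # far end: mostly prerecorded, other camera
]

def mix_frame(trigger_index, live_frames, prerecorded_frame):
    """crossfade the selected live camera with the prerecorded frame,
    the way a jit.xfade object would be driven by the triggers."""
    mix = TRIGGER_MIX[trigger_index]
    live = live_frames[mix["camera"]]
    b = mix["blend"]
    return live * (1.0 - b) + prerecorded_frame * b

if __name__ == "__main__":
    cams = [np.zeros((240, 320, 3)), np.ones((240, 320, 3))]
    pre = np.full((240, 320, 3), 0.5)
    print(mix_frame(1, cams, pre)[0, 0])  # -> [ 0.25  0.25  0.25 ]
```

each trigger just selects a camera and a crossfade amount, so walking down the beam steps the participant through different combinations of live and prerecorded material.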
some additional notes: my laptop ran the Apple Remote Desktop administrator software, and the Mac Mini let me connect to it and control it through a Belkin wireless router hidden in the museum's reception desk. this setup gave us the ability to modify the patch, change the prerecorded source material, record video and audio from the cameras and microphones, and generally troubleshoot the entire setup without pulling out a ladder and attaching a keyboard/mouse/monitor to the headless brain concealed in the beam. power was turned off at night, so an Automator workflow started and stopped the application and started up and shut down the computer each day.
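the daily start/stop logic itself lived in an Automator workflow, but here's a hedged Python equivalent of what it did [ the application path, patch file name, and arguments below are placeholders, not what we actually used ]:

```python
#!/usr/bin/env python
# rough stand-in for what the Automator workflow did each day
# [ the real thing was an Automator workflow, not a script; the
#   application path, patch file, and "start"/"stop" arguments
#   here are placeholders ]
import subprocess
import sys

MAX_APP = "/Applications/MaxMSP.app"        # assumed install location
PATCH = "/Users/gallery/helmet-patch.mxb"   # hypothetical patch file

def start():
    # launch the patch in Max/MSP once the Mac Mini has booted
    subprocess.call(["open", "-a", MAX_APP, PATCH])

def stop():
    # quit the patch and shut the machine down before power is cut
    subprocess.call(["osascript", "-e",
                     'tell application "MaxMSP" to quit'])
    subprocess.call(["osascript", "-e",
                     'tell application "System Events" to shut down'])

if __name__ == "__main__":
    start() if "start" in sys.argv else stop()
```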