Hello! This is my obligatory introductory noob-to-video-synthesis post. I've been lurking for quite a while learning about various systems, and have been poring over the video doodles, music videos, etc. that you guys have made... truly awesome and inspiring stuff! The sense of collaboration and community I've gleaned here is very welcoming.
Though I have never owned a modular system before, I am a quick study on anything electronic. I got interested in video synthesis at Knobcon 2013 when I saw brownshoesonly doing visuals during the live performances. The next day, he took the time to explain the basic concepts to me and I got really excited, but I was not in a financial position to assemble a system. Today, I'm playing keys in a sort of 60's/70's psych outfit. We've dabbled a bit with projected video, lighting, and VHS tapes, but I want to use this as an excuse to plunge into the world of video synthesis.
My primary goals right now are basic audio visualization and live camera feeds. To start, I would like a setup that requires little user interaction, as we're all pretty busy playing our instruments (more advanced audio visualization is my plan down the road!). I have a Visual Cortex in the mail right now.
1. For basic audio visualization, I think we're going to feed a drum mic to a Critter and Guitari Rhythm Scope. I'll need to feed its composite output through a color TBC to integrate it with the LZX system.
2. For live camera feeds, I want to feed 4 small form factor security cameras into this Ambery 4-channel PIP video processor, with the output going into the Visual Cortex. I intend to capture video of us band members and our instruments by mounting the cameras on some repurposed flexible mic-stand pop filter mounts. I may or may not use a composite-to-component box on the PIP processor's output, since monochrome from the cameras would probably be fine for colorizing and manipulation. To start, I'll probably just use automatic channel sequencing to switch through the feeds.
3. To integrate these two concepts, I figured I could start with some keying, changing camera sources/PIP modes... I'm also interested to see what kind of unintentional feedback we get from the cameras, with the projected video at the back of the stage. Keep in mind I'm generating these ideas having never touched the Visual Cortex; I trust lots of inspiration and ideas will stem from experimentation!
I'm mostly just throwing this introduction out there to see what kind of recommendations, advice, etc. the more experienced folks have... thanks!