Video and audio effects.
Arturo00
Is it possible to use conventional audio effects like delays, reverbs, flanger/phasers, ring modulators, or even distortion to manipulate a video signal? Obviously the circuit would need to be built or modified to handle video signals. But I'm curious to know if this sort of thing has been done, or if it's even possible.

If so, what sort of results would it yield?
lizlarsen
The beauty of a modular system (to use the LZX system as an example) is that you can do whatever you want to the signal without destroying its viewability. So absolutely -- after your video comes in via the Video Sync Generator input, patch it through whatever you can think of! You'll get a lot of blurring, softening and distortion through audio-rate effects, but one way to mitigate that is to mix the original video signal with the distorted signal (kind of like a wet/dry mix.)
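In rough numpy terms, that wet/dry idea is just a crossfade between the clean and processed frames -- a sketch only, not how any LZX module actually does it:

import numpy as np

def wet_dry_mix(dry, wet, mix):
    # mix = 0.0 keeps the original video, mix = 1.0 is fully effected
    return (1.0 - mix) * dry + mix * wet

# example: blend a hard-clipped copy of a luma frame 30% in over the original
# frame = np.random.rand(480, 640)                        # stand-in 0..1 luma frame
# blended = wet_dry_mix(frame, np.clip(frame * 4, 0, 1), 0.3)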

This video demonstrates using a self-resonating audio filter to create a video effect:


There's a thread about using audio modules here, too:
https://www.muffwiggler.com/forum/viewtopic.php?t=34122
daverj
Most unmodified audio modules will blur or smear a video signal. That can be fun up to a point, but is pretty limited. Audio modules are fine, on the other hand, as sources of control voltages for video modules.

The signals in video modules are lower in voltage and much higher in frequency than audio signals, so while it's possible to feed the smaller signals directly through an audio module, you get much more noise than if you run the video through an amplifier/interface module to bring the signals up to +/-5 volts. The LZX needs a converter module in both directions to do that. My upcoming video modules already have +/-5 volt inputs built into all modules, so they only need to be converted in one direction. Plus, with an adapter cable my video outs can put out +/-2 V signals directly (but still need a converter to get to the full +/-5 V).
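Just to put numbers on the level mismatch -- assuming a generic 0-1 V video range and a +/-5 V audio range, not any particular module's spec -- the conversion each way is just gain and offset:

def video_to_audio_level(v_video):
    # 0..1 V video signal scaled up to the +/-5 V an audio module expects
    return v_video * 10.0 - 5.0

def audio_to_video_level(v_audio):
    # +/-5 V scaled back down to 0..1 V before it goes out as video again
    return (v_audio + 5.0) / 10.0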

It might be possible to swap parts in some DIY modules, and possibly in some commercial modules depending on construction methods, to get them to pass higher frequency signals and therefore have less blur.

Since my system works on composite color signals, low frequency audio modules will tend to kill the color part of the signal when blurring it.

Like with the LZX, no matter how much you distort or damage the video signal, it still comes out of my system as stable recordable video.
Arturo00
Thanks guys. But I guess what I was mainly wondering is whether anyone, you guys included, has ever taken the same principles as audio effects and built them into a module designed specifically for video. Sorta like how LZX's Multimode filter is a filter designed specifically for video signals and frequencies.

I'll check out that link, Lars. The answer to my question is probably in there somewhere. Also, that video was one of the first demo videos I watched.

Dave, I'm looking forward to seeing what modules you're coming out with. Your video oscillator looks fucking sweet! And you already have an LZX interface module in the works. I will most definitely be picking up something of yours in the future.
Arturo00
daverj wrote:
It might be possible to swap parts in some DIY modules, and possibly in some commercial modules depending on construction methods, to get them to pass higher frequency signals and therefore have less blur.


I guess this sort of implies that it hasn't happened yet, to your knowledge.
lizlarsen
Ah, I see what you mean. Well, for things like delays, reverbs, chorus, flangers, phase modulators, etc -- those are all time-based effects. Video equivalents would typically involve various video feedback techniques -- and definitely, there's all kinds of ways to add slight delays to a video signal. One of the big ones will be frame-buffer based manipulation, which will come into play with the LZX system in the future. But there's so much you can do with just a feedback camera, a monitor, and a few video modules to do things with the feedback path.
lizlarsen
Another, more direct way to think of this: with analogue images, "delay" as an effect relates to "position". For example, if we took an entire video image and moved it to the right by 10-15 pixels, that would equate to something close to a 1 microsecond delay. If we added a "feedback" control to that (like an audio delay has, to get multiple repeats), we'd see fading repetitions of the image streaking across the screen. Doing an analogue delay with a wide range at full video bandwidth would be very difficult, but it can be done using frame buffers (you could think of this as a "digital video delay", I suppose.) Phasers, chorus, reverb, flange, etc. are all going to fit into this same category of effect, as far as what you'd expect to see them "do", visually.
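Here's a rough numpy sketch of that idea (an illustration of the math, not an analogue circuit): shift the frame right for the delay, and feed a fraction of it back for the repeats.

import numpy as np

def video_delay(frame, shift_px=12, feedback=0.6, repeats=5):
    # horizontal shift stands in for the delay: at a ~13.5 MHz pixel rate,
    # 12-15 pixels is roughly 1 microsecond
    out = frame.astype(float).copy()
    tap = frame.astype(float)
    for _ in range(repeats):
        tap = np.roll(tap, shift_px, axis=1) * feedback   # shift right, attenuate
        out += tap                                        # fading repeats streak across the screen
    return np.clip(out, 0.0, 1.0)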

Ring modulator is typically a 4-quadrant analog multiplier, so similar to a VCA. That would be a very useful circuit to have working at video bandwidths, particularly when doing vector rescanning and similar techniques. It's been on our prototyping list.

Distortions are usually just waveshapers of various varieties. You can think of the Sandin IP Function Generator as a sort of video distortion. By adding gain to the signal, you can get the white and black areas to clip.
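Both of those boil down to one-line operations if you sketch them on sampled frames (again, just the math, not how you'd build the analogue circuits):

import numpy as np

def ring_mod(a, b):
    # four-quadrant multiply of two bipolar (-1..+1) signals -- the same math as a
    # ring modulator, or a VCA driven through zero
    return a * b

def video_distortion(frame, gain=3.0):
    # waveshaper-style "distortion": add gain, then clip, so the white and
    # black areas flatten out first
    return np.clip(frame * gain, 0.0, 1.0)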
Arturo00
Ok. So similar time-based effects can be achieved using processes and techniques rather than specific pieces of equipment designed to fulfill said tasks. Lars, don't you have a video up of a frame-buffer manipulator prototype? I think it made tracers. That's a very cool effect.
lizlarsen
Yeah, it's weird with analogue video, because when you're talking about time-based effects, you're also talking about positioning/geometric-based effects if the time increment you're talking about is shorter than a frame period (1/30th of a second at 30 Hz). Because with analogue video, when things happen faster than the frame rate, they become part of the image.

So that's why I say video feedback is the same or similar to a "video delay". If you have the feedback camera aligned with the feedback monitor, but just slightly off... you are creating a "video delay".

So it's not that these effects haven't been created "for video", it's just that the effects in question have different names and contexts in the video realm. That said, there's still a lot of unexplored territory, too! And even though LZX have our video filter -- there are innumerable ways to approach the design of such a circuit (look at all the audio filters out there).
barto
what about using a microphone? if it was filtered correctly, could it be used as a good modulation source?
lizlarsen
There wouldn't be any difference between using a microphone and any other audio source to modulate video. If you were very good at humming at different frequencies, you could maybe get bars that scrolled at different rates. smile

There's all sorts of info that can be extracted from an audio source, though, and used to create video modulation -- amplitude envelope extraction (i.e. an envelope follower), frequency extraction (bandpass filters followed by envelope followers), or peak extraction (an envelope follower into a comparator to create a pulse whenever the amplitude crosses a threshold.)
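As a digital sketch of two of those ideas -- an envelope follower and a peak/comparator gate, with the sample rate and cutoff picked arbitrarily -- it's just rectify-and-smooth followed by a threshold (the frequency-extraction version would simply put a bandpass filter in front of the follower):

import numpy as np

def envelope_follower(audio, sample_rate=48000, cutoff_hz=10.0):
    # rectify, then one-pole low-pass: the output is a slow CV that tracks loudness
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    env = np.zeros(len(audio))
    level = 0.0
    for i, sample in enumerate(np.abs(audio)):
        level += alpha * (sample - level)
        env[i] = level
    return env

def peak_gate(envelope, threshold=0.5):
    # comparator: emits a pulse whenever the envelope crosses the threshold
    return (envelope > threshold).astype(float)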
daverj
Arturo, many of the same types of processes do exist in video, but in many cases the concepts are similar but the electronics to do it are very different.

Audio is a continuous time signal. Video is a spatial signal as well as a time signal. But because it's a grid where the pixel to the right happens 70 nanoseconds later, while the pixel below happens tens of thousands of nanoseconds later, processing it is very different.

You can have a delay that moves to the right with fairly simple circuitry. A delay that moves the image down requires a long digital delay. And moving the same pixel in the same place to a different frame takes even longer delays. I've built video delays that delay images for as long as 11 seconds, and as short as a pixel.
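To put rough numbers on those delay lengths (back-of-the-envelope NTSC figures, assuming a 13.5 MHz pixel clock, not specs from my modules):

pixel_delay_ns = 1e9 / 13.5e6        # ~74 ns: one pixel to the right
line_delay_us = 63.5                 # ~63.5 us: the same pixel one line down
frame_delay_ms = 1000.0 / 29.97      # ~33.4 ms: the same pixel in the next frame
frames_for_11s = round(11 * 29.97)   # ~330 stored frames for an 11-second delay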

My frame buffers are time based processors that can store and manipulate many frames, and build up images over time.

Things like audio delays and flangers, even when built with video rate circuitry, only shift or repeat the image to the right. It takes very different circuitry to do the same thing up and down, or frame to frame. But yes, those all do exist.

VCAs in video control contrast instead of volume. Filters blur or sharpen the image, or add and remove color. LFOs cause flicker. Audio rate oscillators make horizontal bars. Supersonic oscillators make vertical bars. Mixing the different frequencies creates shapes. Envelope generators control video modules just like they control audio modules.
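A toy raster model shows why (my own sketch, not any particular module): render a frame whose brightness follows a sine oscillator sampled as the beam scans.

import numpy as np

def oscillator_frame(freq_hz, width=640, height=480, pixel_clock=13.5e6, line_time=63.5e-6):
    # brightness follows a sine oscillator sampled along the raster scan
    y, x = np.mgrid[0:height, 0:width]
    t = y * line_time + x / pixel_clock        # the moment each pixel gets scanned
    return 0.5 + 0.5 * np.sin(2 * np.pi * freq_hz * t)

# oscillator_frame(200)      -> horizontal bars (audio rate, well below the ~15.7 kHz line rate)
# oscillator_frame(200_000)  -> vertical-ish bars (supersonic, many cycles per scanline)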

Pretty much any audio process that can be applied to video has been applied at some point in the past. But many more video-specific processes have been made that don't relate directly to audio, because of the nature of the video signal.
Arturo00
Thanks Dave. That's some good perspective there. I'm so used to audio signals that I totally neglected the spatial aspects of a video signal.