Shortening live sound for monitoring?
Spectromat
Hi all,
I am trying to figure out whether there might be a way to monitor live sound that cuts out part of it (75% of the sound's duration), letting me hear a shortened version (the remaining 25%) of what is playing in some meaningful way?

The idea stems from thinking about recording slow motion video whilst at the same time creating the music/sound/noise that will eventually play back with it. I would like to be able to manipulate the sound devices in time with the camera movement (and stay in sync over time).

Not sure if I need to be thinking about software that slices the sound and plays it back using some sort of live granular synthesis, or if I need a tempo device that runs four times faster than the sounds being created. I'm also unsure whether some hardware solution would be needed to pitch shift the live feed?

Any thoughts or ideas would be gratefully received thumbs up
Koekepan
I'm really not clear on your goal here.

Are you trying to cut, say, the tail of a snare during live performance?
Spectromat
Apologies if it’s not very clear; that’s probably because I am not entirely sure anything like this is possible..

I would like to shoot some slow motion video (say 96fps for ease of calculation) whilst creating music as the video is shot, and I have been wondering if there is a way to monitor / record the music at the same time as the video is shot.. but to somehow slow down the live recorded music (to 25% of its speed, so it will play back in sync with the footage at 24fps) so I can hear and use the sound cues to move the camera in time with the music being created?
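To put numbers on it (just sanity-checking my own arithmetic with a throwaway Python calculation):

[code]
# 96fps capture played back at 24fps: how much must the audio slow down?
capture_fps = 96
playback_fps = 24

slowdown = capture_fps / playback_fps      # 4.0 -> footage plays 4x slower
audio_speed = playback_fps / capture_fps   # 0.25 -> audio at 25% of live speed

print(f"footage slowdown: {slowdown:.0f}x")
print(f"audio playback speed: {audio_speed:.0%} of live speed")
[/code]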

Sorry if it sounds rather convoluted, but I have done a whole load of searching and not found any references to help me explain it better..

Hope that makes a little more sense?
Koekepan
OK, so you're shooting something at 96fps with the intention of displaying it at 24fps.

So if you're playing at 240bpm, it will come out as 60bpm.

Let's assume your goal is 60bpm ambient, in time with flowing water in a creek, for a meditation video.

Now you want to hear your 240bpm madness so that you can react to it in a timely fashion, as a director of photography?

I'm not about to say that this is impossible, but it seems unnecessarily difficult. Why not shoot the footage first and synchronise later? If your music has a solid beat, why not just play a metronome? I have the feeling there's an awful lot you're not telling us about this scenario, because as described the constraints sound weird as hell.
Spectromat
Koekepan wrote:
I have the feeling there's an awful lot you're not telling us about this scenario, because as described the constraints sound weird as hell.


The reason for not getting into too much detail is that I am currently exploring the potential of a situation where sound is recorded live. My background is as a visual artist, where keeping scenarios flexible can prevent one from missing potential creative avenues for a project's realisation; nothing more sinister than that.. cool

Art often creates these weird constraints, it is true, but sometimes those constraints can help to reveal something interesting, and I am exploring the potential around a moment of experience where visual and audio information are connected.

The scenario of flowing water filmed at 96fps (for 24fps playback) and 60bpm ambient sound played at 240bpm is well observed, and close to some of my recent visual experiments. I am currently exploring less beat-driven sounds, with sample-based ambient drones created through loop tweaking, granular manipulation and some analogue synth modulation, but maybe pattern-based things could work.

Ultimately I am thinking of a camera rig / synthesizer rig that includes both image capture and sound generation / recording gear so that things like focus, zoom and camera movement can be connected to alterations of audio parameters.

Whether the output of this is for live performance, or for the recreation of specific events where image and audio are captured together for screening later, still isn't that clear to me yet. Thanks for your current suggestions thumbs up
Koekepan
Then what I would suggest, in the absence of a specific use case (or rather, one more specific than capturing live sound), is to be prepared to use pitch shifting so that the sounds aren't all redshifted out of existence, with perhaps a gating effect to trim audio tails where available.
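To put a number on the redshift: quarter-speed playback drops everything by 12 * log2(4) = 24 semitones, i.e. two octaves. Roughly like this, offline (a minimal sketch, assuming Python with librosa and soundfile installed; the filename is made up):

[code]
import librosa
import soundfile as sf

# Load the live take (hypothetical filename).
y, sr = librosa.load("live_take.wav", sr=None)

# Option A: time-stretch to 4x length, pitch left intact.
stretched = librosa.effects.time_stretch(y, rate=0.25)  # rate < 1 slows it down

# Option B: pre-shift the take up two octaves, so a naive
# quarter-speed (resampled) playback lands back at the original pitch.
preshifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=24)

sf.write("stretched_for_24fps.wav", stretched, sr)
sf.write("preshifted_take.wav", preshifted, sr)
[/code]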
Spectromat
Thanks Koekepan thumbs up

Appreciate any contributions smile
Technologear?
The music video guys will have lots more to say on this topic than most around these parts of the internet. Not sure where they hang out.
I thought of this:
[video]https://youtu.be/QzqOeD7FV2s[/video]
Get your ambient chipmunk on

Edit: sorry I can't embed links properly
Spectromat

That's a good point..

I initially thought this was going to come down to audio processing (removing some of the audio signal). I wondered if something like Max/MSP might be able to chop the audio up and feed parts of the chopped-up sound into a mix running at a quarter of its live speed.. but learning the language well enough to code it, without knowing if it might work, could take a while.
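For anyone curious, the crude "keep 25%, drop 75%" version of that idea is really just a gate. A minimal real-time sketch, assuming Python with numpy and the sounddevice library (window length and keep-ratio are arbitrary choices); note it only mutes three quarters of each window, it doesn't actually speed anything up:

[code]
import numpy as np
import sounddevice as sd

KEEP = 0.25      # fraction of each window passed through
WINDOW = 4096    # samples per processing window (arbitrary)

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    out = np.zeros_like(indata)
    keep = int(frames * KEEP)
    out[:keep] = indata[:keep]   # pass the first 25% of the window...
    outdata[:] = out             # ...and silence the remaining 75%

# Duplex stream: monitors the live input, gated to a 25% duty cycle.
with sd.Stream(channels=1, blocksize=WINDOW, callback=callback):
    print("Gated monitor running, Ctrl+C to stop")
    sd.sleep(60 * 1000)  # run for 60 seconds
[/code]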

Will check out some video hangouts and ask around cool

(to embed the video, remove the s after http)