MUFF WIGGLER Forum Index
Self generating patches....tips and ideas ?
MUFF WIGGLER Forum Index -> Modular Synth General Discussion
gis_sweden
cptnal wrote:
Here's a patch inspired by (for which read "bears no relation to") Rob Hordijk's flowchart for the Rungler:


When your personal household robot sounds like this it's time for service.
Lovely. It's peanut butter jelly time!
cptnal
gis_sweden wrote:
cptnal wrote:
Here's a patch inspired by (for which read "bears no relation to") Rob Hordijk's flowchart for the Rungler:


When your personal household robot sounds like this it's time for service.
Lovely. It's peanut butter jelly time!


Kind words, gratefully received. Guinness ftw!
oberdada
I just did a radio show about David Tudor and revisited his music. I don't know to what extent it has been discussed here, but surely his work is a direct precursor to modular self-generating patches, although he chose to interact with the system when he felt the need to do so.

There are some schematics here:
http://www.davidtudor.org/
In particular I find the neural synthesis project intriguing. I think there was an attempt to follow it up, but what happened?
gis_sweden
oberdada wrote:
I just did a radio show about David Tudor and revisited his music.

Link?
gis_sweden
In very little time I recorded three patches today. Or should I say one patch, three recordings. I use basically the same patch for all three sounds. They are mildly generative... What I modulate is the amplitude and a filter. On the first track in this list I also modulate the decay of the sounds. This time I use digital effects (delay and reverb)! The patches are built around the A-160 clock divider. The A-160 does mathematical division, not musical. I use two oscillators and a ring modulator (and some filter resonance and white noise) as sound sources. Simple patch. The A-160 spits out triggers to envelopes. I modulate the envelopes. Some delay and reverb. That's it hihi

https://freesound.org/people/gis_sweden/sounds/487349/
https://freesound.org/people/gis_sweden/sounds/487270/
https://freesound.org/people/gis_sweden/sounds/487241/
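The "mathematical, not musical" division just means each output fires on every Nth input pulse, with no awareness of bars or downbeats. A rough sketch (the binary division ratios match the A-160's panel, but the exact trigger/edge behavior of the hardware is simplified here):

```python
def clock_divider(triggers, divisions=(2, 4, 8, 16)):
    """Each output fires on every Nth input trigger - integer
    division of the clock, with no musical/bar awareness."""
    outputs = {n: [] for n in divisions}
    for i, trig in enumerate(triggers):
        for n in divisions:
            # fire when the running pulse count is a multiple of n
            outputs[n].append(trig and (i % n == 0))
    return outputs

# eight master clock pulses in, four divided trigger streams out
pulses = clock_divider([True] * 8)
```

Patching each divided output to a different envelope is all it takes to get the interlocking trigger patterns described above.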
Noodle Twister
Nice sounds, three very different results from one patch. SlayerBadger!
oberdada
gis_sweden wrote:
oberdada wrote:
I just did a radio show about David Tudor and revisited his music.

Link?


https://radionova.no/programmer/sortkanal
and click on 30.09.2019, which will be available for a month.
gis_sweden
oberdada wrote:
https://radionova.no/programmer/sortkanal
and click on 30.09.2019, which will be available for a month.

Thanks for the link. I enjoyed the program. Inspiring and educational.
What comes to my mind is: do we have an idea of how neural networks SHOULD sound? Neural networks don't have a sound.
Like we have an idea of how space sounds...
Comparing Tudor's Neural Synthesis work with what comes out of cellF (https://www.youtube.com/watch?v=1G-vk5QWRsg)
cptnal
I use a neural network in all my stuff. I keep it in my head. Mr. Green

But seriously, is it a useful metaphor for describing a method of composition? I've yet to hear a description of how it could work (which of course doesn't mean there isn't one - I'm just not aware of it yet).
Pelsea
Neural nets are learning systems. They are currently very hot in AI circles for things like facial recognition. You can teach a neural net the rules of harmony by showing it a lot of music: https://pdfs.semanticscholar.org/12bc/bc4e3410eb3b7e5a531c5d2da9b66b2b7ae1.pdf

Once the net is educated, it can jam with you a la band-in-a-box. This sort of thing involves a lot of number crunching, so I doubt it will turn up as a module anytime soon. The output side of a net is often built with fuzzy logic, which is lean enough to run on a PIC or Arduino. I use FL to harmonize things using hand made rule sets (skipping the whole database association part). I’ve published that work, so for all I know my code is in some of the harmony modules that are out there.
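A minimal sketch of what a hand-made rule set of that kind might look like (the scale degrees, candidate chords, and weights below are all invented for illustration; they are not Pelsea's published rules, and real fuzzy logic would blend memberships rather than just pick a maximum):

```python
# Hypothetical hand-made rule set: for each melody scale degree,
# candidate chords get a "fit" weight, and the best-scoring chord wins.
RULES = {
    # scale degree (0 = tonic) -> {chord: fit weight}
    0: {"I": 0.9, "IV": 0.4, "vi": 0.5},
    2: {"I": 0.6, "V": 0.7, "iii": 0.4},
    4: {"I": 0.8, "V": 0.5, "iii": 0.6},
}

def harmonize(degree):
    """Return the chord whose rule weight is highest for this degree."""
    candidates = RULES.get(degree, {"I": 0.5})  # fall back to the tonic
    return max(candidates, key=candidates.get)
```

Because it is just table lookups and comparisons, this kind of thing really is lean enough for a PIC or Arduino, as noted above.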
cptnal
Indeed, my thoughts are "what could this tell us that's interesting in this context?" Is it...

a) Just buzzword bingo?
b) Musically interesting, but in a different context (like the functioning of a module)?
c) Musically interesting in a patching context?

If it's c, how would this work? The way I understand neural networks and machine learning is it learns from a provided dataset, compares inputs with what it's learned, and makes a decision based on some algorithms. In the modular world what do we have as datasets? Sequences? Something else? How would we take in inputs? Comparators, envelope followers, logic...? Algorithms we probably wouldn't have any problem with.

Anyway, just thinking out loud... hmmm.....
oberdada
gis_sweden wrote:

Comparing Tudor's Neural Synthesis work with what comes out of cellF


Actually I think cellF and other things by nonlinear circuits are much closer in spirit to Tudor than the applications Pelsea mentions where there is a learning phase and a well defined task to achieve.

Neural nets don't have to be large when used for experimental purposes. Sprott has come up with a four-dimensional neural net that is chaotic. The fun begins when you apply feedback between the neurons, which I think is avoided in most variants of learning networks.
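A small feedback net of that flavour takes only a few lines to simulate. The weight matrix below is made up for illustration (it is not Sprott's published system), so whether this particular one is chaotic would need a Lyapunov-exponent check; the point is just the structure: every neuron feeds back into every other.

```python
import math

# Four tanh "neurons" with full cross-feedback. Weights are invented
# for illustration, not taken from Sprott's chaotic network.
W = [
    [ 0.0,  2.7, -1.9,  0.6],
    [-2.3,  0.0,  1.4, -0.8],
    [ 1.1, -2.6,  0.0,  2.2],
    [-0.7,  1.8, -2.4,  0.0],
]

def step(state):
    """One update: each neuron squashes the weighted sum of the others."""
    return [math.tanh(sum(w * s for w, s in zip(row, state))) for row in W]

state = [0.1, -0.2, 0.3, -0.1]
trajectory = [state]
for _ in range(200):
    state = step(state)
    trajectory.append(state)
```

One nice property for modular use: tanh keeps every neuron's output bounded in (-1, 1), so the trajectory maps straight onto a well-behaved CV range.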
oberdada
cptnal wrote:

a) Just buzzword bingo?
b) Musically interesting, but in a different context (like the functioning of a module)?
c) Musically interesting in a patching context?

If it's c, how would this work? The way I understand neural networks and machine learning is it learns from a provided dataset, compares inputs with what it's learned, and makes a decision based on some algorithms. In the modular world what do we have as datasets? Sequences? Something else? How would we take in inputs? Comparators, envelope followers, logic...? Algorithms we probably wouldn't have any problem with.


I suppose a neural net module would be feasible if you do the learning offline on a computer and upload the set of weights and biases to the module. The neural network architecture might be fixed once and for all if that makes it easier. Then you could build things like a cowbell recognizer: feed it audio, and it outputs a gate whenever it hears the cowbell.

I'm sure there must be other applications even though the cowbell recognizer is the only one I can come up with. But in general, I think the input typically would be one or more audio signals, or perhaps a bunch of cv signals, and the output might just be a gate signal which classifies all input into two kinds. When neural nets are used as classifiers there has to be a complexity reduction and a whole lot of information loss between the input and the output. This seems to be the way it usually works in the buzzword bingo setting. But of course the network could output cv or audio as well.
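The offline-training / fixed-weights idea reduces the module's run-time job to a single forward pass ending in a threshold. A toy sketch, where the weights are placeholders standing in for whatever offline training would produce and the "features" stand in for real audio analysis (this is not an actual cowbell detector):

```python
import math

# Pretend these weights came from offline training on a computer
# and were uploaded to the module; they are placeholders.
WEIGHTS = [0.8, -0.5, 1.2]
BIAS = -0.6

def gate(features, threshold=0.5):
    """Forward pass: weighted sum -> sigmoid -> gate high/low.
    This is the complexity reduction described above: many input
    values collapse into one binary classification."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z)) > threshold

gate([1.0, 0.2, 0.9])  # features suggesting "cowbell" -> gate high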
gis_sweden
Tudor uses a chip that emulates neuron cells, like the ones in our brains (https://davidtudor.org/Articles/warthman.html). cellF uses actual living neurons. This is not machine learning; correct me if I’m wrong. It’s just a way of controlling/creating feedback, generating gates and CV. Tudor uses a computer, but the principle must be the same.
A “machine learning module” would be cool. Feed it with a sequence and see how it slowly replicates it and then develop the sequence.
cptnal
gis_sweden wrote:
A “machine learning module” would be cool. Feed it with a sequence and see how it slowly replicates it and then develop the sequence.


For sure, but a machine learning patch would be cooler. cool

I'm thinking a sequence with a parallel LFO/sample and hold. Compare S&H to sequence, and if one voltage is higher than the other, do something. The problem would be remembering that decision for the next iteration...
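One way to get that one bit of memory is a toggling latch (a flip-flop module, say) driven by the comparator: the comparison decides, the latch remembers into the next iteration. A sketch with illustrative voltages:

```python
# Comparator + one-bit latch: compare the S&H voltage to the sequence
# step, and flip the stored bit whenever the S&H wins. The latch state
# is the "remembered decision" carried into the next iteration.
def run(sequence, sample_hold):
    latch = False           # the remembered decision
    decisions = []
    for seq_v, sh_v in zip(sequence, sample_hold):
        if sh_v > seq_v:    # comparator: S&H beats the sequence step
            latch = not latch   # "do something": toggle the stored bit
        decisions.append(latch)
    return decisions
```

The latch output could then gate a VCA, reset a divider, or reroute the sequence, so the "decision" audibly persists beyond the step that made it.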
colb
gis_sweden wrote:
...Feed it with a sequence and see how it slowly replicates it and then develop the sequence.


To get anything worthwhile you would need to 'feed' it with maybe 100,000 (ideally many more than that) different sequences, each ranked in some way to define its suitability.
colb
cptnal wrote:
gis_sweden wrote:
A “machine learning module” would be cool. Feed it with a sequence and see how it slowly replicates it and then develop the sequence.


For sure, but a machine learning patch would be cooler. cool

I'm thinking a sequence with a parallel LFO/sample and hold. Compare S&H to sequence, and if one voltage is higher than the other, do something. The problem would be remembering that decision for the next iteration...


For 'learning' to make sense as a concept, you need a goal, and then you need a system complex enough that that goal can be encoded within the system...

-------------
Another way of looking at it might be that a multiple-feedback generative patch already is a learning algorithm. It often starts off sounding like shit, then as we search for the elusive sweet spot by tweaking it we are 'teaching' it. It's still unpredictable, but now its output is closer to the goal due to us optimising its 'weights'. It has 'learned' how to make sounds we like better than the ones it started with...

Of course that's not really machine learning - it's machine+human learning. For Machine Learning we would need a way to encode the 'sounds we like better' part into the patch, and have the patch tweak itself to find that goal. I think that's maybe a bit harder to achieve smile
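The analogy can be written out as a hill-climbing loop, with a made-up score function standing in for "sounds we like better" (which, as noted above, is exactly the part a real patch cannot encode about itself):

```python
import random

# Treat knob positions as "weights" and the wiggler's reaction as the
# objective. like() is a hypothetical stand-in for human taste - here
# it arbitrarily prefers all knobs near 0.6.
def like(knobs):
    return -sum((k - 0.6) ** 2 for k in knobs)

def tweak_until_happy(steps=500, seed=1):
    rng = random.Random(seed)
    knobs = [rng.random() for _ in range(4)]   # random starting patch
    best = like(knobs)
    for _ in range(steps):
        # nudge each knob a little, clamped to its physical range
        trial = [min(1.0, max(0.0, k + rng.uniform(-0.05, 0.05)))
                 for k in knobs]
        if like(trial) > best:     # keep the tweak only if it sounds better
            knobs, best = trial, like(trial)
    return knobs, best

knobs, score = tweak_until_happy()
```

Replace `like()` with a human ear and you have the machine+human loop described above; encode it in the patch itself and you would have actual machine learning.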
gis_sweden
Okay, forget learning. Here is a boring sound... Neuron patch (?)... Inspired by the NLC Squid Axon (I have saved some hp for that one!) I have patched up a simple 2-stage ASR with 2 S/Hs. Most of the time it plays its atonal melody, but when the patch is stimulated (by a Sloth LFO) feedback opens up. I can adjust the sensitivity in different ways (so I can simulate the intake of drugs...) Drunk Banana The sound from the OSCs goes through VCFs with some resonance. The VCFs are controlled by the same Sloth LFO. My "neuron patch"... No hands during recording. My favourite part is when nothing happens - for almost 1 min?! But you can hear the filter working via the VCFs. The CV must be in some strange area.

https://freesound.org/people/gis_sweden/sounds/487670/
cptnal
colb wrote:
cptnal wrote:
gis_sweden wrote:
A “machine learning module” would be cool. Feed it with a sequence and see how it slowly replicates it and then develop the sequence.


For sure, but a machine learning patch would be cooler. cool

I'm thinking a sequence with a parallel LFO/sample and hold. Compare S&H to sequence, and if one voltage is higher than the other, do something. The problem would be remembering that decision for the next iteration...


For 'learning' to make sense as a concept, you need a goal, and then you need a system complex enough that that goal can be encoded within the system...

-------------
Another way of looking at it might be that a multiple-feedback generative patch already is a learning algorithm. It often starts off sounding like shit, then as we search for the elusive sweet spot by tweaking it we are 'teaching' it. It's still unpredictable, but now its output is closer to the goal due to us optimising its 'weights'. It has 'learned' how to make sounds we like better than the ones it started with...

Of course that's not really machine learning - it's machine+human learning. For Machine Learning we would need a way to encode the 'sounds we like better' part into the patch, and have the patch tweak itself to find that goal. I think that's maybe a bit harder to achieve smile


Had to check to be sure, but learning doesn't necessarily need a goal. I certainly do it (at least these days) just for the pleasure of finding things out. Whether it has a more particular meaning in the machine context I don't know...

And I get what you're saying with the human-machine thing (kinda what I was driving at with my neural network being in my head). But I don't think it's necessary to limit our output to things we like. Rather, we should be prepared to hear things we didn't realise we might like. That's the appeal of generative for me - the surprises. The patch I just pulled down was a series of one godawful screeching noise after another. Not a thing I would have chosen to patch, but I enjoyed listening to it play out, and I didn't attempt to make it into something else.

Or am I just weird... zombie
cptnal
gis_sweden wrote:
Okay, forget learning. Here is a boring sound... Neuron patch (?)... Inspired by the NLC Squid Axon (I have saved some hp for that one!) I have patched up a simple 2-stage ASR with 2 S/Hs. Most of the time it plays its atonal melody, but when the patch is stimulated (by a Sloth LFO) feedback opens up. I can adjust the sensitivity in different ways (so I can simulate the intake of drugs...) Drunk Banana The sound from the OSCs goes through VCFs with some resonance. The VCFs are controlled by the same Sloth LFO. My "neuron patch"... No hands during recording. My favourite part is when nothing happens - for almost 1 min?! But you can hear the filter working via the VCFs. The CV must be in some strange area.

https://freesound.org/people/gis_sweden/sounds/487670/


Nice! I reckon it's much more fun trying to patch up what a strange module does than actually buying the module. Mr. Green
gis_sweden
cptnal wrote:
Nice! I reckon it's much more fun trying to patch up what a strange module does than actually buying the module. Mr. Green

You are right. But still I'm looking forward to my Squid Axon. Is an ASR module "cheating" too? The Squid Axon is an ASR - with chaotic feedback.

cptnal wrote:
That's the appeal of generative for me - the surprises.

Agree! Like the patch/sound above. The two-tone melody changes. The nothingness for almost a minute. I can stand back and think, "I didn't do that!".

I like the "neuron" analogy (without being an expert at all!!!). I'm sketching a 2-"neuron" patch. Each neuron has internal feedback, and they also influence each other - cross feedback. The small box in the cell body is a "regulator" that sets how sensitive the neuron is. The big box is "a sound generating something".
cptnal
hmmm.....

Big boxes are obviously VCOs, small boxes are mixers. But what are "constant stimulation" and "irritation"...? CV sources maybe?
gis_sweden
cptnal wrote:
Big boxes are obviously VCOs, small boxes are mixers. But what are "constant stimulation" and "irritation"...? CV sources maybe?


lol
The answer is ... CV? Always CV, in some form...
"constant stimulation" is some sort of CV-sequence. "Irritation" an LFO or maybe a gate. The feedback consist of audio-feedback.
I wasn't thinking VCO, more an almost self-oscillating VCF. Fell free to fill the boxes as you like hihi
colb
cptnal wrote:

Had to check to be sure, but learning doesn't necessarily need a goal. I certainly do it (at least these days) just for the pleasure of finding things out.

'Pleasure' and 'finding things out' both sound like goals to me!
Quote:


And I get what you're saying with the human-machine thing (kinda what I was driving at with my neural network being in my head). But I don't think it's necessary to limit our output to things we like. Rather, we should be prepared to hear things we didn't realise we might like. That's the appeal of generative for me - the surprises.


That was my point - things we didn't expect, but like. We tune the 'weights' of the generative patch, and ideally end up with something we like (respond positively to) - that doesn't mean it's something we thought we would like. That's why I likened it to a learning algorithm - because the combination of wiggler and patch learns something via the process that would otherwise not have been learned. And the tuning process is similar in some ways to how an artificial neural network tunes itself - multiple iterations of tweaking weights and comparing results against a goal (in our case, "I find that noise compelling" or "I like that" or "cool sounds man")
cptnal
colb wrote:
cptnal wrote:

Had to check to be sure, but learning doesn't necessarily need a goal. I certainly do it (at least these days) just for the pleasure of finding things out.

'Pleasure' and 'finding things out' both sound like goals to me!
Quote:


And I get what you're saying with the human-machine thing (kinda what I was driving at with my neural network being in my head). But I don't think it's necessary to limit our output to things we like. Rather, we should be prepared to hear things we didn't realise we might like. That's the appeal of generative for me - the surprises.


That was my point - things we didn't expect, but like. We tune the 'weights' of the generative patch, and ideally end up with something we like (respond positively to) - that doesn't mean it's something we thought we would like. That's why I likened it to a learning algorithm - because the combination of wiggler and patch learns something via the process that would otherwise not have been learned. And the tuning process is similar in some ways to how an artificial neural network tunes itself - multiple iterations of tweaking weights and comparing results against a goal (in our case, "I find that noise compelling" or "I like that" or "cool sounds man")


Meditate on this I shall. thumbs up