Self generating patches....tips and ideas ?

Anything modular synth related that is not format specific.

Moderators: Kent, Joe., luketeaford, lisa

User avatar
gis_sweden
Veteran Wiggler
Posts: 684
Joined: Mon Dec 07, 2015 12:57 am
Location: Sweden

Post by gis_sweden » Sat Sep 28, 2019 2:02 pm

cptnal wrote:Here's a patch inspired by (for which read "bears no relation to") Rob Hordijk's flowchart for the Rungler:
When your personal household robot sounds like this, it's time for service.
Lovely. :banana:

User avatar
cptnal
Super Deluxe Wiggler
Posts: 4000
Joined: Wed Jun 14, 2017 2:48 am
Location: People's Republic of Scotland

Post by cptnal » Sat Sep 28, 2019 2:54 pm

gis_sweden wrote:
cptnal wrote:Here's a patch inspired by (for which read "bears no relation to") Rob Hordijk's flowchart for the Rungler:
When your personal household robot sounds like this, it's time for service.
Lovely. :banana:
Kind words, gratefully received. :guinness:
Is it finished?
Latest Tune:
Sounds: SoundCloud , Freesound
Racks: Big Case, Top Row, Funboat, Tinicase

User avatar
oberdada
Common Wiggler
Posts: 247
Joined: Tue Nov 01, 2016 12:06 pm

Post by oberdada » Wed Oct 02, 2019 1:14 pm

I just did a radio show about David Tudor and revisited his music. I don't know to what extent it has been discussed here, but surely his work is a direct precursor to modular self generating patches, although he chose to interact with the system when he felt the need to do so.

There are some schematics here:
http://www.davidtudor.org/
In particular I find the Neural Synthesis project intriguing. I think there was an attempt to follow it up, but what happened?

User avatar
gis_sweden
Veteran Wiggler
Posts: 684
Joined: Mon Dec 07, 2015 12:57 am
Location: Sweden

Post by gis_sweden » Sat Oct 05, 2019 3:22 pm

oberdada wrote:I just did a radio show about David Tudor and revisited his music.
Link?

User avatar
gis_sweden
Veteran Wiggler
Posts: 684
Joined: Mon Dec 07, 2015 12:57 am
Location: Sweden

Post by gis_sweden » Sat Oct 05, 2019 3:24 pm

In very little time I recorded three patches today. Or should I say one patch, three recordings. I use basically the same patch for all three sounds. They are mildly generative... What I modulate is the amplitude and a filter. On the first track in this list I also modulate the decay of the sounds. This time I use digital effects (delay and reverb)! The patches are built around the A-160 clock divider. The A-160 does mathematical division, not musical. I use two oscillators and a ring modulator (and some filter resonance and white noise) as sound sources. Simple patch. The A-160 spits out triggers to envelopes. I modulate the envelopes. Some delay and reverb. That's it :hihi:

https://freesound.org/people/gis_sweden/sounds/487349/
https://freesound.org/people/gis_sweden/sounds/487270/
https://freesound.org/people/gis_sweden/sounds/487241/
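The "mathematical division, not musical" point is easy to see in code. A quick sketch (a simplified stand-in, not the A-160's actual flip-flop circuit): each output fires on every Nth input clock, giving plain binary subdivisions rather than dotted or triplet groupings.

```python
def clock_divider(steps, divisors=(2, 4, 8)):
    """Clock divider in the A-160 spirit: each output fires a trigger
    on every Nth input clock -- pure integer division, nothing
    'musical' like dotted notes or swing."""
    return {n: [1 if step % n == 0 else 0 for step in range(steps)]
            for n in divisors}

outs = clock_divider(8)
# outs[2] -> [1, 0, 1, 0, 1, 0, 1, 0]
# outs[4] -> [1, 0, 0, 0, 1, 0, 0, 0]
```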

User avatar
Noodle Twister
Common Wiggler
Posts: 234
Joined: Wed Jan 02, 2019 10:22 pm
Location: UK

Post by Noodle Twister » Sun Oct 06, 2019 8:00 am

Nice sounds, three very different results from one patch. :sb:

User avatar
oberdada
Common Wiggler
Posts: 247
Joined: Tue Nov 01, 2016 12:06 pm

Post by oberdada » Sun Oct 06, 2019 9:03 am

gis_sweden wrote:
oberdada wrote:I just did a radio show about David Tudor and revisited his music.
Link?
https://radionova.no/programmer/sortkanal
and click on 30.09.2019, which will be available for a month.

User avatar
gis_sweden
Veteran Wiggler
Posts: 684
Joined: Mon Dec 07, 2015 12:57 am
Location: Sweden

Post by gis_sweden » Tue Oct 08, 2019 3:04 am

oberdada wrote:https://radionova.no/programmer/sortkanal
and click on 30.09.2019, which will be available for a month.
Thanks for the link. I enjoyed the program. Inspiring and educational.
What comes to my mind is: do we have an idea of how neural networks SHOULD sound? Neural networks don't have a sound.
Like we have an idea of how space sounds...
Comparing Tudor's Neural Synthesis work with what comes out of cellF...

User avatar
cptnal
Super Deluxe Wiggler
Posts: 4000
Joined: Wed Jun 14, 2017 2:48 am
Location: People's Republic of Scotland

Post by cptnal » Tue Oct 08, 2019 6:09 am

I use a neural network in all my stuff. I keep it in my head. :mrgreen:

But seriously, is it a useful metaphor for describing a method of composition? I've yet to hear a description of how it could work (which of course doesn't mean there isn't one - I'm just not aware of it yet).

User avatar
Pelsea
Ultra Wiggler
Posts: 984
Joined: Thu Apr 19, 2018 11:46 am
Location: Santa Cruz CA
Contact:

Post by Pelsea » Tue Oct 08, 2019 11:27 am

Neural nets are learning systems. They are currently very hot in AI circles for things like facial recognition. You can teach a neural net the rules of harmony by showing it a lot of music— https://pdfs.semanticscholar.org/12bc/b ... 2b7ae1.pdf

Once the net is educated, it can jam with you a la band-in-a-box. This sort of thing involves a lot of number crunching, so I doubt it will turn up as a module anytime soon. The output side of a net is often built with fuzzy logic, which is lean enough to run on a PIC or Arduino. I use FL to harmonize things using hand made rule sets (skipping the whole database association part). I’ve published that work, so for all I know my code is in some of the harmony modules that are out there.
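Pelsea's fuzzy-logic approach can be caricatured in a few lines. This is a hypothetical toy rule set, not his published code: each candidate chord gets a fuzzy membership score for the melody note, and the harmonizer picks the maximum.

```python
# Hypothetical rule set for harmonizing one melody note
# (pitch classes 0-11, C=0). Three chords is plenty for a toy.
CHORDS = {"C": {0, 4, 7}, "F": {5, 9, 0}, "G": {7, 11, 2}}

def membership(note, chord_tones):
    """Fuzzy degree that 'note fits chord': 1.0 for a chord tone,
    0.5 for a semitone away from one, 0.0 otherwise."""
    if note % 12 in chord_tones:
        return 1.0
    if any((note - t) % 12 in (1, 11) for t in chord_tones):
        return 0.5
    return 0.0

def harmonize(note):
    """Defuzzify by maximum: pick the chord with the best membership."""
    return max(CHORDS, key=lambda c: membership(note, CHORDS[c]))

chord = harmonize(9)  # A is a chord tone of F -> "F"
```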
Books and tutorials on modular synthesis at http://peterelsea.com
Patch responsibly.
pqe

User avatar
cptnal
Super Deluxe Wiggler
Posts: 4000
Joined: Wed Jun 14, 2017 2:48 am
Location: People's Republic of Scotland

Post by cptnal » Tue Oct 08, 2019 2:07 pm

Indeed, my thoughts are "what could this tell us that's interesting in this context?" Is it...

a) Just buzzword bingo?
b) Musically interesting, but in a different context (like the functioning of a module)?
c) Musically interesting in a patching context?

If it's c, how would this work? The way I understand neural networks and machine learning is that they learn from a provided dataset, compare inputs with what they've learned, and make a decision based on some algorithms. In the modular world what do we have as datasets? Sequences? Something else? How would we take in inputs? Comparators, envelope followers, logic...? Algorithms we probably wouldn't have any problem with.

Anyway, just thinking out loud... :hmm:

User avatar
oberdada
Common Wiggler
Posts: 247
Joined: Tue Nov 01, 2016 12:06 pm

Post by oberdada » Tue Oct 08, 2019 3:00 pm

gis_sweden wrote: Comparing Tudors Neural Synthesis work with what comes out of cellF
Actually I think cellF and other things by nonlinear circuits are much closer in spirit to Tudor than the applications Pelsea mentions where there is a learning phase and a well defined task to achieve.

Neural nets don't have to be large when used for experimental purposes. Sprott has come up with a four-dimensional neural net that is chaotic. The fun begins when you apply feedback between the neurons, which I think is avoided in most variants of learning networks.
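Sprott-style small networks are easy to play with numerically. A minimal sketch of four coupled tanh "neurons" with cross-feedback, integrated with Euler steps; the weight matrix here is illustrative (chosen for strong feedback loops), not Sprott's published coefficients.

```python
import numpy as np

def neuron_net(W, x0, dt=0.05, steps=4000):
    """Integrate dx_i/dt = -x_i + tanh(sum_j W[i,j] * x_j) with Euler
    steps. Self-excitation on the diagonal destabilises the origin;
    tanh saturation keeps everything bounded, so the trajectory may
    settle into a cycle or wander chaotically depending on the gains."""
    x = np.array(x0, dtype=float)
    traj = []
    for _ in range(steps):
        x = x + dt * (-x + np.tanh(W @ x))
        traj.append(x.copy())
    return np.array(traj)

# Illustrative weights with cross-feedback (tune them and the
# character of the motion changes completely):
W = np.array([[ 2.0,  5.0, -4.0,  0.0],
              [-5.0,  2.0,  0.0,  3.0],
              [ 4.0,  0.0,  2.0, -5.0],
              [ 0.0, -3.0,  5.0,  2.0]])

traj = neuron_net(W, x0=[0.1, 0.0, 0.0, 0.0])
cv = traj[:, 0]  # one neuron's state, usable as a CV-like signal
```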

User avatar
oberdada
Common Wiggler
Posts: 247
Joined: Tue Nov 01, 2016 12:06 pm

Post by oberdada » Tue Oct 08, 2019 3:34 pm

cptnal wrote: a) Just buzzword bingo?
b) Musically interesting, but in a different context (like the functioning of a module)?
c) Musically interesting in a patching context?

If it's c, how would this work? The way I understand neural networks and machine learning is that they learn from a provided dataset, compare inputs with what they've learned, and make a decision based on some algorithms. In the modular world what do we have as datasets? Sequences? Something else? How would we take in inputs? Comparators, envelope followers, logic...? Algorithms we probably wouldn't have any problem with.
I suppose a neural net module would be feasible if you do the learning offline on a computer and upload the set of weights and biases to the module. The neural network architecture might be fixed once and for all if that makes it easier. Then you could build things like a cowbell recognizer: feed it audio, and it outputs a gate whenever it hears the cowbell.

I'm sure there must be other applications even though the cowbell recognizer is the only one I can come up with. But in general, I think the input would typically be one or more audio signals, or perhaps a bunch of CV signals, and the output might just be a gate signal which classifies all input into two kinds. When neural nets are used as classifiers there has to be a complexity reduction and a whole lot of information loss between the input and the output. This seems to be the way it usually works in the buzzword bingo setting. But of course the network could output CV or audio as well.
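The offline-trained module idea can be sketched as a fixed-architecture forward pass. The weights below are random placeholders standing in for an uploaded training run, and the input is an assumed feature vector (say, a few filter-bank band energies), so this shows the plumbing, not a working recognizer.

```python
import numpy as np

def forward(x, weights):
    """Fixed two-layer net: the weights would be trained offline on a
    computer, then 'uploaded' to the module once and for all."""
    (W1, b1), (W2, b2) = weights
    h = np.tanh(W1 @ x + b1)                  # hidden layer
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # sigmoid 'cowbell-ness'
    return bool(y[0] > 0.5)                   # comparator stage -> gate out

# Made-up weights standing in for an offline training run:
rng = np.random.default_rng(0)
weights = [(rng.normal(size=(8, 4)), np.zeros(8)),
           (rng.normal(size=(1, 8)), np.zeros(1))]

# Assumed feature vector, e.g. four spectral band energies:
gate = forward(np.array([0.9, 0.1, 0.4, 0.2]), weights)
```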

User avatar
gis_sweden
Veteran Wiggler
Posts: 684
Joined: Mon Dec 07, 2015 12:57 am
Location: Sweden

Post by gis_sweden » Wed Oct 09, 2019 1:49 am

Tudor uses a chip that emulates neuron cells, like the ones in our brains (https://davidtudor.org/Articles/warthman.html). cellF uses actual living neurons. This is not machine learning. Correct me if I'm wrong. It's just a way of controlling/creating feedback, generating gates and CV. Tudor uses a computer, but the principle must be the same.
A “machine learning module” would be cool. Feed it with a sequence and see how it slowly replicates it and then develop the sequence.

User avatar
cptnal
Super Deluxe Wiggler
Posts: 4000
Joined: Wed Jun 14, 2017 2:48 am
Location: People's Republic of Scotland

Post by cptnal » Wed Oct 09, 2019 2:32 am

gis_sweden wrote:A “machine learning module” would be cool. Feed it with a sequence and see how it slowly replicates it and then develop the sequence.
For sure, but a machine learning patch would be cooler. 8-)

I'm thinking a sequence with a parallel LFO/sample and hold. Compare S&H to sequence, and if one voltage is higher than the other, do something. The problem would be remembering that decision for the next iteration...
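A toy of that idea, with a held offset voltage playing the role of the "remembered" decision (purely illustrative; the nudge amount and voltage ranges are made up):

```python
import random

def sh_comparator_patch(sequence, steps=64, nudge=0.1, seed=1):
    """Toy of the S&H-comparator idea: a held 'offset' voltage is the
    memory. Each clock, a random S&H voltage is compared to the
    current sequence step; the comparator decision nudges the held
    offset, so past decisions persist into later iterations."""
    random.seed(seed)
    offset = 0.0                          # the remembered voltage
    for step in range(steps):
        seq_v = sequence[step % len(sequence)]
        sh_v = random.uniform(0.0, 5.0)   # LFO into sample & hold
        if sh_v > seq_v:
            offset += nudge               # comparator high: do something
        else:
            offset -= nudge               # comparator low: do the opposite
    return offset

offset = sh_comparator_patch([1.0, 2.0, 3.0, 4.0])
```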

colb
Common Wiggler
Posts: 243
Joined: Wed Nov 16, 2016 3:06 pm
Location: Scotland

Post by colb » Wed Oct 09, 2019 10:31 am

gis_sweden wrote:...Feed it with a sequence and see how it slowly replicates it and then develop the sequence.
To get anything worthwhile you would need to 'feed' it with maybe 100,000 (ideally many more than that) different sequences, each ranked in some way to define its suitability.

colb
Common Wiggler
Posts: 243
Joined: Wed Nov 16, 2016 3:06 pm
Location: Scotland

Post by colb » Wed Oct 09, 2019 10:45 am

cptnal wrote:
gis_sweden wrote:A “machine learning module” would be cool. Feed it with a sequence and see how it slowly replicates it and then develop the sequence.
For sure, but a machine learning patch would be cooler. 8-)

I'm thinking a sequence with a parallel LFO/sample and hold. Compare S&H to sequence, and if one voltage is higher than the other, do something. The problem would be remembering that decision for the next iteration...
For 'learning' to make sense as a concept, you need a goal, and then you need a system complex enough that that goal can be encoded within the system...

-------------
Another way of looking at it might be that a multiple feedback generative patch already is a learning algorithm. It often starts off sounding like shit, then as we search for the elusive sweet-spot by tweaking it we are 'teaching' it. It's still unpredictable, but now its output is closer to the goal due to us optimising its 'weights'. It has 'learned' how to make sounds we like better than the ones it started with...

Of course that's not really machine learning - it's machine+human learning. For Machine Learning we would need a way to encode the 'sounds we like better' part into the patch, and have the patch tweak itself to find that goal. I think that's maybe a bit harder to achieve :)
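The machine+human learning loop described above is essentially random hill climbing, which is easy to sketch. The rating function below is a stand-in for the human ear, with a made-up hidden "sweet spot"; in the real patch there is no such function, only a wiggler deciding what sounds good.

```python
import random

def tweak_patch(rating, knobs, iters=200, step=0.2, seed=42):
    """Hill-climbing caricature of wiggler-plus-patch learning: nudge
    one knob at random, keep the change only if the rating says the
    result sounds better."""
    random.seed(seed)
    best = rating(knobs)
    for _ in range(iters):
        i = random.randrange(len(knobs))
        trial = list(knobs)
        trial[i] += random.uniform(-step, step)
        score = rating(trial)
        if score > best:
            knobs, best = trial, score
    return knobs, best

# Stand-in rating: prefer knob settings near a hidden 'sweet spot'.
sweet_spot = [0.7, 0.2, 0.9]
rating = lambda k: -sum((a - b) ** 2 for a, b in zip(k, sweet_spot))

knobs, score = tweak_patch(rating, [0.0, 0.0, 0.0])
```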

User avatar
gis_sweden
Veteran Wiggler
Posts: 684
Joined: Mon Dec 07, 2015 12:57 am
Location: Sweden

Post by gis_sweden » Wed Oct 09, 2019 2:18 pm

Okay, forget learning. Here is a boring sound... Neuron patch (?)... Inspired by the NLC Squid Axon (I have saved some hp for that one!) I have patched up a simple 2-stage ASR with 2 S/H. Most of the time it plays its atonal melody, but when the patch is stimulated (by a Sloth LFO) feedback opens up. I can adjust the sensitivity in different ways (so I can simulate the intake of drugs...) :drunkbanana: The sound from the OSCs goes through VCFs with some resonance. The VCFs are controlled by the same Sloth LFO. My "neuron patch"... No hands during recording. My favourite part is when nothing happens - for almost 1 min?! But you can hear the filter working via the VCFs. The CV must be in some strange area.

https://freesound.org/people/gis_sweden/sounds/487670/
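For readers who haven't met an ASR: an analog shift register passes each held voltage one stage down the chain on every clock, so a melody echoes through the stages. A minimal sketch of the 2-stage version used in the patch above:

```python
def asr_step(stages, new_cv):
    """Analog shift register: on each clock, stage 0 samples the input
    and every other stage takes the previous stage's held voltage."""
    return [new_cv] + stages[:-1]

stages = [0.0, 0.0]            # a 2-stage ASR, as in the patch
melody = [1.0, 3.0, 2.0, 5.0]  # incoming CV, sampled per clock
history = []
for cv in melody:
    stages = asr_step(stages, cv)
    history.append(list(stages))
# history -> [[1.0, 0.0], [3.0, 1.0], [2.0, 3.0], [5.0, 2.0]]
```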

User avatar
cptnal
Super Deluxe Wiggler
Posts: 4000
Joined: Wed Jun 14, 2017 2:48 am
Location: People's Republic of Scotland

Post by cptnal » Wed Oct 09, 2019 2:19 pm

colb wrote:
cptnal wrote:
gis_sweden wrote:A “machine learning module” would be cool. Feed it with a sequence and see how it slowly replicates it and then develop the sequence.
For sure, but a machine learning patch would be cooler. 8-)

I'm thinking a sequence with a parallel LFO/sample and hold. Compare S&H to sequence, and if one voltage is higher than the other, do something. The problem would be remembering that decision for the next iteration...
For 'learning' to make sense as a concept, you need a goal, and then you need a system complex enough that that goal can be encoded within the system...

-------------
Another way of looking at it might be that a multiple feedback generative patch already is a learning algorithm. It often starts off sounding like shit, then as we search for the elusive sweet-spot by tweaking it we are 'teaching' it. It's still unpredictable, but now its output is closer to the goal due to us optimising its 'weights'. It has 'learned' how to make sounds we like better than the ones it started with...

Of course that's not really machine learning - it's machine+human learning. For Machine Learning we would need a way to encode the 'sounds we like better' part into the patch, and have the patch tweak itself to find that goal. I think that's maybe a bit harder to achieve :)
Had to check to be sure, but learning doesn't necessarily need a goal. I certainly do it (at least these days) just for the pleasure of finding things out. Whether it has a more particular meaning in the machine context I don't know...

And I get what you're saying with the human-machine thing (kinda what I was driving at with my neural network being in my head). But I don't think it's necessary to limit our output to things we like. Rather, we should be prepared to hear things we didn't realise we might like. That's the appeal of generative for me - the surprises. The patch I just pulled down was a series of one godawful screeching noise after another. Not a thing I would have chosen to patch, but I enjoyed listening to it play out, and I didn't attempt to make it into something else.

Or am I just weird... :zombie:

User avatar
cptnal
Super Deluxe Wiggler
Posts: 4000
Joined: Wed Jun 14, 2017 2:48 am
Location: People's Republic of Scotland

Post by cptnal » Wed Oct 09, 2019 2:25 pm

gis_sweden wrote:Okay, forget learning. Here is a boring sound... Neuron patch (?)... Inspired by the NLC Squid Axon (I have saved some hp for that one!) I have patched up a simple 2-stage ASR with 2 S/H. Most of the time it plays its atonal melody, but when the patch is stimulated (by a Sloth LFO) feedback opens up. I can adjust the sensitivity in different ways (so I can simulate the intake of drugs...) :drunkbanana: The sound from the OSCs goes through VCFs with some resonance. The VCFs are controlled by the same Sloth LFO. My "neuron patch"... No hands during recording. My favourite part is when nothing happens - for almost 1 min?! But you can hear the filter working via the VCFs. The CV must be in some strange area.

https://freesound.org/people/gis_sweden/sounds/487670/
Nice! I reckon it's much more fun trying to patch up what a strange module does than actually buying the module. :mrgreen:

User avatar
gis_sweden
Veteran Wiggler
Posts: 684
Joined: Mon Dec 07, 2015 12:57 am
Location: Sweden

Post by gis_sweden » Thu Oct 10, 2019 2:25 am

cptnal wrote:Nice! I reckon it's much more fun trying to patch up what a strange module does than actually buying the module. :mrgreen:
You are right. But I'm still looking forward to my Squid Axon. Is an ASR module "cheating" too? The Squid Axon is an ASR - with chaotic feedback.
cptnal wrote:That's the appeal of generative for me - the surprises.
Agree! Like the patch/sound above. The two-tone melody changes. The nothingness for almost a minute. I can stand back and think, "I didn't do that!".

I like the "neuron" analogy (without being an expert at all!!!) . I sketch on a 2 "neuron" patch. Each neuron has internal feedback, they also influence each other - cross feedback. The small box in the cell body is a "regulator". Sets how sensitive the neuron is. The big box is "a sound generating something".
Image
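One way to read the sketch in code (all numbers invented): each "neuron" leakily integrates its constant stimulation plus cross-feedback from the other neuron's firing, fires a gate when its level crosses the regulator threshold, and is drained again by firing.

```python
def two_neurons(stim, steps=100, sens=(1.0, 1.5), fb=0.6, leak=0.8):
    """Sketch of the two-'neuron' drawing: leaky integration of
    stimulation plus cross-feedback; a unit fires (gate = 1) when its
    level passes its 'regulator' threshold, and firing drains it."""
    lv = [0.0, 0.0]
    fired = ([], [])
    for _ in range(steps):
        out = [1 if lv[i] > sens[i] else 0 for i in (0, 1)]
        lv = [leak * lv[i] + stim[i] + fb * out[1 - i] - sens[i] * out[i]
              for i in (0, 1)]
        for i in (0, 1):
            fired[i].append(out[i])
    return fired

fired = two_neurons(stim=(0.3, 0.2))  # neuron 0 fires periodically
```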

User avatar
cptnal
Super Deluxe Wiggler
Posts: 4000
Joined: Wed Jun 14, 2017 2:48 am
Location: People's Republic of Scotland

Post by cptnal » Thu Oct 10, 2019 3:18 am

:hmm:

Big boxes are obviously VCOs, small boxes are mixers. But what are "constant stimulation" and "irritation"...? CV sources maybe?

User avatar
gis_sweden
Veteran Wiggler
Posts: 684
Joined: Mon Dec 07, 2015 12:57 am
Location: Sweden

Post by gis_sweden » Thu Oct 10, 2019 3:41 am

cptnal wrote:Big boxes are obviously VCOs, small boxes are mixers. But what are "constant stimulation" and "irritation"...? CV sources maybe?
Image
:lol:
The answer is ... CV? Always CV, in some form...
"Constant stimulation" is some sort of CV sequence. "Irritation" is an LFO or maybe a gate. The feedback consists of audio feedback.
I wasn't thinking VCO, more an almost self-oscillating VCF. Feel free to fill the boxes as you like :hihi:

colb
Common Wiggler
Posts: 243
Joined: Wed Nov 16, 2016 3:06 pm
Location: Scotland

Post by colb » Thu Oct 10, 2019 12:59 pm

cptnal wrote: Had to check to be sure, but learning doesn't necessarily need a goal. I certainly do it (at least these days) just for the pleasure of finding things out.
'Pleasure' and 'finding things out' both sound like goals to me!

And I get what you're saying with the human-machine thing (kinda what I was driving at with my neural network being in my head). But I don't think it's necessary to limit our output to things we like. Rather, we should be prepared to hear things we didn't realise we might like. That's the appeal of generative for me - the surprises.
That was my point - things we didn't expect, but like - we tune the 'weights' of the generative patch, and ideally end up with something we like (respond positively to) - that doesn't mean it's something we thought we would like. That's why I likened it to a learning algorithm - because the combination of wiggler and patch learns something via the process that would otherwise not have been learned. And the tuning process is similar in some ways to how an artificial neural network tunes itself - multiple iterations of tweaking weights and comparing results against a goal (in our case - "I find that noise compelling" or "I like that" or "cool sounds man")

User avatar
cptnal
Super Deluxe Wiggler
Posts: 4000
Joined: Wed Jun 14, 2017 2:48 am
Location: People's Republic of Scotland

Post by cptnal » Thu Oct 10, 2019 1:08 pm

colb wrote:
cptnal wrote: Had to check to be sure, but learning doesn't necessarily need a goal. I certainly do it (at least these days) just for the pleasure of finding things out.
'Pleasure' and 'finding things out' both sound like goals to me!

And I get what you're saying with the human-machine thing (kinda what I was driving at with my neural network being in my head). But I don't think it's necessary to limit our output to things we like. Rather, we should be prepared to hear things we didn't realise we might like. That's the appeal of generative for me - the surprises.
That was my point - things we didn't expect, but like - we tune the 'weights' of the generative patch, and ideally end up with something we like (respond positively to) - that doesn't mean it's something we thought we would like. That's why I likened it to a learning algorithm - because the combination of wiggler and patch learns something via the process that would otherwise not have been learned. And the tuning process is similar in some ways to how an artificial neural network tunes itself - multiple iterations of tweaking weights and comparing results against a goal (in our case - "I find that noise compelling" or "I like that" or "cool sounds man")
Meditate on this I shall. :tu:


Return to “Modular Synth General Discussion”