How should I describe Digital Synths?
boe_dye
When trying to explain analog synths, I find that the standard stuff (VCOs, LFOs, envelopes, HP/LP filters, function generators, and FM) is pretty easy to explain in simple terms. However, I'm finding it a bit difficult to explain things like what Plaits or the telHARMONIC does, or even what a wavetable does.

They aren't in the realm of analog, and yet I don't think it's correct to describe them in the same sense as a DX7 (that's Akemie's Castle, and easy enough), and they definitely aren't sample-based. So where should I put them in the conversation or explanation?

I'd be interested in hearing your thoughts on the matter!
hawkfuzz
Seriously, I just don't get it. Who do you need to describe this to, though?

It's just like an amp modeler or cab simulator...
joem
Re: Plaits, have you looked on the MI website? It seems pretty well described there:

Quote:
8 synthesis models for pitched sounds

- Two detuned virtual analog oscillators with continuously variable waveforms.
- Variable slope triangle oscillator processed by a waveshaper and wavefolder.
- 2-operator FM with continuously variable feedback path.
- Two independently controllable formants modulated by a variable shape window (VOSIM, Pulsar, Grainlet, Casio CZ-style resonant filter…).
- 24-harmonic additive oscillator.
- Wavetable oscillator with four banks of 8x8 waves, with or without interpolation.
- Chord generator, with divide down string/organ emulation or wavetables.
- A collection of speech synthesis algorithms (formant filter, SAM, LPC), with phoneme control and formant shifting. Several banks of phonemes or segments of words are available.

8 synthesis models for noise and percussions

- Granular sawtooth or sine oscillator, with variable grain density, duration and frequency randomization.
- Clocked noise processed by a variable shape resonant filter.
- 8 layers of dust/particle noise processed by resonators or all-pass filters.
- Extended Karplus-Strong (aka Rings’ red mode), excited by bursts of white noise or dust noise.
- Modal resonator (aka Rings’ green mode), excited by a mallet or dust noise.
- Analog kick drum emulation (two flavors).
- Analog snare drum emulation (two flavors).
- Analog high-hat emulation (two flavors).
Dragonaut
I think the answer would definitely depend on the oscillator or synth and the person you were describing it to.
commodorejohn
A digital synthesizer is an algorithm or series of algorithms intended to transform a linear counter into a sequence of varying amplitude values. As for what exactly it does to achieve that or what the end result is, that's entirely dependent on the synthesizer in question (a wavetable, for example, is just a lookup table with the counter outputs used as an address/index.)
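
To make that concrete, here is a minimal sketch of the counter-plus-lookup-table idea (purely illustrative Python; the table size and names are mine, not any real module's firmware):

Code:
import numpy as np

SAMPLE_RATE = 48_000
TABLE_SIZE = 2048
# one cycle of a waveform stored as a lookup table (a sine here, but it could be anything)
table = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def render(freq_hz, num_samples):
    """Phase accumulator (the 'linear counter') indexes into the table at a pitch-dependent rate."""
    out = np.empty(num_samples)
    phase = 0.0
    increment = freq_hz * TABLE_SIZE / SAMPLE_RATE   # table steps per output sample
    for i in range(num_samples):
        idx = int(phase)                             # counter output used as an address/index
        frac = phase - idx
        nxt = (idx + 1) % TABLE_SIZE
        out[i] = (1 - frac) * table[idx] + frac * table[nxt]   # linear interpolation between entries
        phase = (phase + increment) % TABLE_SIZE
    return out

samples = render(440.0, SAMPLE_RATE)                 # one second of A440

Swapping the table contents (or morphing between several stored tables) is, at this level, all a wavetable synth's "wave" control really does.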
Nino
Without modulation, digital oscillators tend to spit out more complex voltages than analogue ones.

Digital modules are both digital and analogue; analogue modules are analogue only.
noisewreck
The difficulty, as I see it, is that there are many types of synthesis that exist in the digital realm, and they are often combined with other synthesis methods.

Native Instruments' range is a good example: compare Massive, Absynth and FM8, which are all decidedly digital in nature.

So, perhaps a better approach would be to discuss synthesis types rather than an umbrella term such as “digital synth”.
thetwlo
a good start would be books by Curtis Roads:
https://www.amazon.com/Curtis-Roads/e/B000AQ4N20/
boe_dye
commodorejohn wrote:
A digital synthesizer is an algorithm or series of algorithms intended to transform a linear counter into a sequence of varying amplitude values. As for what exactly it does to achieve that or what the end result is, that's entirely dependent on the synthesizer in question (a wavetable, for example, is just a lookup table with the counter outputs used as an address/index.)


Yes, I think this is what I was looking for. I found myself in a conversation unable to give a convincing explanation of it, so I was hoping for some greater knowledge on the matter, which you have thankfully provided!

Cheers!
Homepage Englisch
In my opinion, there's a problem with nomenclature. Back in the day, there were a number of electrophone instruments, often (but not always) with a keyboard interface attached: novachords, claviolines, theremins, mellotrons, trautoniums, synthesizers, and more. After 1983 or so, with the DX7, we kept calling the new digital devices "synthesizers", even though they're fundamentally different. Yes, most of them are keyboard electrophones, but the differences in workflow, execution and sound are bigger than the differences between a piano and a celesta. And even describing how "digital synths" work depends on the type of synthesis: samplers, FM, phase distortion, LA and so on.
boe_dye
Homepage Englisch wrote:
In my opinion, there's a problem with nomenclature. Back in the day, there were a number of electrophone instruments, often (but not always) with a keyboard interface attached: novachords, claviolines, theremins, mellotrons, trautoniums, synthesizers, and more. After 1983 or so, with the DX7, we kept calling the new digital devices "synthesizers", even though they're fundamentally different. Yes, most of them are keyboard electrophones, but the differences in workflow, execution and sound are bigger than the differences between a piano and a celesta. And even describing how "digital synths" work depends on the type of synthesis: samplers, FM, phase distortion, LA and so on.


This is actually where the confusion I kept having was coming from, and I think you made a good point regarding the nomenclature. I hear "digital synth" and I think DX7, but the DX7 is just one type of digital synth. So when I was trying to describe digital, I was automatically going to the DX7 as a benchmark, and then I found myself asking, "So what is the difference, and what am I missing?"

This is good, thank you!
strangegravity
01000100 01101001 01100111 01101001 01110100 01100001 01101100 00100000 01010011 01111001 01101110 01110100 01101000
in_sherman
They sound quantized. Analog does not, because there are no discrete values; it is fully continuous, with infinite resolution imparting a "wooden" timbre to your sounds.
MarcelP
in_sherman wrote:
They sound quantized. Analog does not, because there are no discrete values; it is fully continuous, with infinite resolution imparting a "wooden" timbre to your sounds.


I often wondered why my piano playing is described as “wooden” - I should try a digital piano!
cornutt
What you are touching on is "methods of synthesis", that is, the basic means by which electronic instruments create sounds and timbres. The most common ones are:

1. Subtractive: create a waveform containing a number of overtones, and then use filters to selectively remove or emphasize certain overtones.
2. Additive: combine a number of simple waveforms (usually sine waves) to build up overtones.
3. Amplitude modulation / frequency modulation / phase modulation: use one waveform to modulate the generation of another waveform, thereby creating a waveform with a pattern of overtones determined by the mathematics of the modulation.
4. Sampling: record a sound and then use various means to alter the playback of the sound.
5. Resynthesis: impose the spectrum of one sound on another sound.
6. Physical modeling: emulate the behavior of a vibrating physical object.

Some of these, notably subtractive, are more easily done with analog circuitry. Others, such as additive and sampling, are more easily done with digital. However, all of them have been done with both.
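
To give one of these some flesh, here is a rough sketch of method 3 as 2-operator FM/phase modulation (illustrative Python with made-up names, not any particular instrument's code):

Code:
import numpy as np

SAMPLE_RATE = 48_000

def two_op_fm(carrier_hz, ratio=2.0, index=3.0, seconds=1.0):
    """Chowning-style pair: a sine modulator phase-modulates a sine carrier.

    'ratio' is the modulator/carrier frequency ratio, 'index' the modulation depth.
    Sidebands appear at carrier +/- n * modulator, weighted by Bessel functions of the index.
    """
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

# An integer ratio keeps the spectrum harmonic; ratio=2 yields odd harmonics only.
tone = two_op_fm(220.0, ratio=2.0, index=3.0)

Sweeping the index over time is what gives the classic DX-style timbral envelopes.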
Pelsea
It's not a matter of digital vs. analog circuitry it's the approach to making sound. Most instruments use one of these:

1. Mechanical--sound is the result of the vibration of wood, wire, membranes etc. interacting with a physical resonator. The inclusion of amplification does not alter the process.

2. Sampling--recordings of the above (or other things) are played on demand. Mellotrons are a common example of analog sampling, although Pierre Schaeffer did things with phono disks that were awfully close.

3. Electronic modeling of physical processes. Could be analog or digital. No one says the model has to be particularly accurate.

4. Mathematical modeling of physical processes. Again, analog or digital. The difference between this and 3 is that you develop an equation that describes the process, then repeatedly solve the equation.

A modular synthesizer is just a handy way of combining all of those possibilities in a box.
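
As a toy example of point 4 (and of the Karplus-Strong mode mentioned in the Plaits list above), here is a sketch in Python; the constants and names are mine, assumed purely for illustration:

Code:
import numpy as np

SAMPLE_RATE = 48_000

def karplus_strong(freq_hz, seconds=1.0, damping=0.996):
    """Pluck a virtual string: a noise-filled delay line, averaged and fed back."""
    delay_len = int(round(SAMPLE_RATE / freq_hz))
    buf = np.random.uniform(-1.0, 1.0, delay_len)      # excitation: a burst of noise
    out = np.empty(int(SAMPLE_RATE * seconds))
    for i in range(len(out)):
        out[i] = buf[0]
        # the "equation" solved over and over: average two neighbours, damp, feed back
        new_sample = damping * 0.5 * (buf[0] + buf[1])
        buf = np.roll(buf, -1)
        buf[-1] = new_sample
    return out

pluck = karplus_strong(220.0)   # roughly one second of a decaying plucked tone

The "model" is nothing more than that one averaging-and-feedback line, solved once per output sample.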
Dcramer
strangegravity wrote:
01000100 01101001 01100111 01101001 01110100 01100001 01101100 00100000 01010011 01111001 01101110 01110100 01101000

we're not worthy
matthewjuran
in_sherman wrote:
They sound quantized. Analog does not, because there are no discrete values; it is fully continuous, with infinite resolution imparting a "wooden" timbre to your sounds.

The mainstream view is that the quantized signal is converted back to a fully continuous one with enough resolution to be, for human listeners, just as detailed as analog. The instrument's digital output rate just has to be above about 40 kHz (twice the upper limit of human hearing), with 16-bit resolution or better.

I wrote out an explanation of digital recording in this other thread that might be helpful: https://www.muffwiggler.com/forum/viewtopic.php?t=205261
boe_dye
strangegravity wrote:
01000100 01101001 01100111 01101001 01110100 01100001 01101100 00100000 01010011 01111001 01101110 01110100 01101000


HAHAHAHAHAHA

You guys have given me so much to work with for my future conversations and explanations -- thank you and keep it coming!
milkshake
matthewjuran wrote:
in_sherman wrote:
They sound quantized. Analog does not, because there are no discrete values; it is fully continuous, with infinite resolution imparting a "wooden" timbre to your sounds.

The mainstream view is that the quantized signal is converted back to a fully continuous one with enough resolution to be, for human listeners, just as detailed as analog. The instrument's digital output rate just has to be above about 40 kHz (twice the upper limit of human hearing), with 16-bit resolution or better.

I wrote out an explanation of digital recording in this other thread that might be helpful: https://www.muffwiggler.com/forum/viewtopic.php?t=205261


Thanks for that reply, matthewjuran; there's just so much misconception about digital and analogue technology.

The idea that an analogue signal has infinite resolution is just wrong, wrong, wrong.
For those who want to learn a bit more: A Mathematical Theory of Communication by Claude Shannon.
Maybe the most important aspect of this paper is that a "bit" is a unit of information, and it applies to both analogue and digital systems.


One other often misunderstood thing is that digital audio is a stream of numbers that represent a continuous signal. There are no steps in a digital signal. The confusion arises from sample editors. What you see in a sample editor is NOT the actual signal!
nigel
milkshake wrote:
One other often misunderstood thing is that digital audio is a stream of numbers that represent a continuous signal. There are no steps in a digital signal. The confusion arises from sample editors. What you see in a sample editor is NOT the actual signal!

The digital "signal" is just a sequence of sample values, so really it's nothing but a series of steps. However it can be used to exactly reconstruct the original (bandwidth limited) signal.
cretaceousear
hawkfuzz wrote:
Seriously, I just don't get it. Who do you need to describe this to, though?
It's just like an amp modeler or cab simulator...

This comment is wrong. sad banana
Amp modellers/cab simulations are not the same thing in any way whatsoever - modellers are akin to convolution reverbs.
milkshake
nigel wrote:
milkshake wrote:
One other often misunderstood thing is that digital audio is a stream of numbers that represent a continuous signal. There are no steps in a digital signal. The confusion arises from sample editors. What you see in a sample editor is NOT the actual signal!

The digital "signal" is just a sequence of sample values, so really it's nothing but a series of steps. However it can be used to exactly reconstruct the original (bandwidth limited) signal.


A digital signal has only two values; that's what makes it so robust.
In other words, digital audio is a row of binary numbers.

When you say "steps", people confuse that with the picture in a sample editor.
Misk
It definitely depends on the algorithms being used, but in general you could just say "math". A digital signal is a series of steps, sure, but what you're hearing is also the result of the way those 1s and 0s are interpolated.
Drakhe
Exhale
I've yet to hear an AD/DA chain that sounds like a copper wire.
Something always gets imparted; some blurring occurs.
Even on Prism ones.

So yeah, in theory, after the reconstruction filter the recorded waveform is similar to its original analog form.
But some errors are always present.

Take some snappy analog synth, any one will do.
I did it with a Waldorf Pulse.

Feed the left output to an analog mixer, and the right through an AD/DA chain and into the same mixer. A/B the results.
Significantly less attack. No matter what you do, digital will smooth the sound.
In theory, yeah, 96 kHz is good enough to capture all the details.
But... human ears are very sensitive.
Especially in the midrange.

That's why I like to jam with analog synths plugged directly into an analog mixer, a little bit of FX, and then to the speakers.
After a digital stage, a lot of the life and mojo is lost.
Yes Powder
THIS ARGUMENT NEVER GETS OLD
Peter Grenader
goatse.cx
Dave Peck
I agree with Cornutt. Asking "How do I describe a digital synthesizer / digital synthesis?" is a bit like asking "How do I describe weather?" There are a lot of different kinds.

There are several different types of synthesis that can all be done using digital synthesizer hardware & software (virtual analog subtractive, additive, FM/PM, sampling, wavetable, etc.). Some digital synths use only one of these types/methods of digital synthesis, some allow you to use more than one method.

So in general, it is any synthesizer that uses digital hardware and code to implement one or more of various methods of synthesis to create sound, as opposed to a synthesizer that uses analog electronic circuits to do this.
milkshake
Exhale wrote:

Feed the left output to an analog mixer, and the right through an AD/DA chain and into the same mixer. A/B the results.


That is not how you test these things; your results are meaningless.

Perceptual testing is very difficult; if you don't know what you are doing, don't do it.


A simple test that everyone can do is to mix a song 60 dB below another song. Can you hear that song?
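
A bit of arithmetic behind that test (my numbers, using the standard rule of thumb for uniform quantization, not anything stated in the post): 16-bit audio gives roughly

    20 \log_{10}\!\left(2^{16}\right) \approx 96\ \mathrm{dB}

of dynamic range, so a song mixed 60 dB down still sits on the order of 36 dB above the 16-bit quantization noise floor and remains clearly audible.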
flts
milkshake wrote:
nigel wrote:
milkshake wrote:
One other often misunderstood thing is that digital audio is a stream of numbers that represent a continuous signal. There are no steps in a digital signal. The confusion arises from sample editors. What you see in a sample editor is NOT the actual signal!

The digital "signal" is just a sequence of sample values, so really it's nothing but a series of steps. However it can be used to exactly reconstruct the original (bandwidth limited) signal.


A digital signal has only two values; that's what makes it so robust.
In other words, digital audio is a row of binary numbers.

When you say "steps", people confuse that with the picture in a sample editor.


This is getting into semantics territory, but semantics is a lot more fun to discuss than the endless Gearslutz-level digital vs. analog perceptual issues, with their flawed testing methods and the whole psychological shebang...

You're both right, and to understand the whole picture, both interpretations / meanings of the term are needed.

You are talking about the electronic, low-level view of a "digital signal", which is a binary (aka logic) signal with low and high voltage levels, 0 and 1.

Nigel is talking about a "digital signal" in the context of digital signal processing (DSP), where the term has the specific meaning of the discrete-time, discrete-amplitude sampled data - that is, the exact "waveform with steps" that one can zoom in on in a sample editor.

I.e., on a low level, your standard binary digital computer encodes sampled audio as binary data (0, 1). The signal processing algorithms, sample editors etc. process it on a level of abstraction where, e.g., a second of audio might be represented by 48000 samples, each of which can have an amplitude value from 0 to 65535, drawn from an evenly spaced quantized set of amplitude values (in the specific case of 16-bit unsigned, 48 kHz audio data).

Even though each of those example samples is, on a low level, represented by a row of sixteen "0"s and "1"s, on both abstraction levels a "digital signal" is the correct term (or one of the correct terms) to use. The context just needs to be mentioned.

And, looking from another angle, whether you want to say that the signal is a sequence of bits (from an information-theoretic / electronic / CPU architecture / low-level software dev point of view) or a sequence of numbers with a given precision corresponding to quantized amplitude levels (from a DSP perspective), the Nyquist-Shannon sampling theorem says that, given a sampling rate high enough for the bandwidth of the source signal, there is no reason why you couldn't reconstruct the original analog signal perfectly from that digital signal.
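
A quick numerical illustration of that last point (a sketch only; the tone, rate and window are arbitrary choices of mine): rebuild a 1 kHz tone from its 48 kHz samples by sinc interpolation and check the error away from the edges of the finite window, where the truncated sum cannot match the ideal infinite one.

Code:
import numpy as np

fs = 48_000                                  # sample rate (Hz)
f0 = 1_000                                   # test tone, well below fs / 2
n = np.arange(512)                           # sample indices
x = np.sin(2 * np.pi * f0 * n / fs)          # the "digital signal": just a list of numbers

# Whittaker-Shannon interpolation: rebuild the waveform on a dense time grid
# using nothing but the samples.
t = np.linspace(128 / fs, 384 / fs, 2000)    # interior of the window only
x_rec = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])

# compare with the ideal continuous sine at the same instants
err = np.max(np.abs(x_rec - np.sin(2 * np.pi * f0 * t)))
print(f"max interpolation error: {err:.1e}")  # small relative to the signal amplitude of 1.0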
matthewjuran
A consideration when forming an opinion of an electronic instrument’s sound is that distortion doesn’t come just from the signal parameters.
cycad73
Digital synths reached their peak with the PPG 360, or maybe the Alles synth / Crumar GDS, and it was all downhill from there... listen to early Laurie Spiegel or the first Rolf Trostel album and you will be amazed at how good these early digital synths can sound... in other words, tone matters... musicality matters... the analog parts and the specifics of the hardware implementation above all matter...

Samplers/romplers too: the best are the Emulator I/II or the Kurzweil 250, which had separate voice cards and mixed all the voices in an analog stage. The Emu also did NO DSP/sampling rate conversion; there was a separate variable-rate clock for each voice, so you never had aliasing: if you needed a different pitch, you simply clocked the sample at a different rate (an analog operation). An engineer would look at that today and say there's horrible overkill in components; we can simply do all this in software. But in the early 1980s all these tools were sold for > $10K because the technology had to prove itself musically against analog synths and acoustic instruments, so there was little incentive to make the process cheaper; you had whatever was needed to get the best tone.

I am especially curmudgeonly these days after poring over an hour-plus of Google NSynth videos and not hearing, at any moment, a single usable or inspiring sound. Just because everyone is an expert with software these days does not mean they can design something for use in a musical context. I have the most respect for people like Olivier/Mutable or the Mungo people, who take ideas from the algorithms world but are overall agnostic as to the exact mix of technologies, because the bottom line is musicality.
commodorejohn
cycad73 wrote:
The Emu also did NO DSP/sampling rate conversion; there was a separate variable-rate clock for each voice, so you never had aliasing: if you needed a different pitch, you simply clocked the sample at a different rate (an analog operation).

This is definitely an interesting facet of the design; effectively, this means that any noise introduced by the sampling and reconstruction process becomes additional harmonic (or inharmonic-but-tangibly-related) content in the output. I think this is also part of what makes the Amiga sound so distinctive.
cycad73
commodorejohn wrote:
cycad73 wrote:
The Emu also did NO DSP/sampling rate conversion; there was a separate variable-rate clock for each voice, so you never had aliasing: if you needed a different pitch, you simply clocked the sample at a different rate (an analog operation).

This is definitely an interesting facet of the design; effectively, this means that any noise introduced by the sampling and reconstruction process becomes additional harmonic (or inharmonic-but-tangibly-related) content in the output. I think this is also part of what makes the Amiga sound so distinctive.


True, thanks! I should have been clearer and said that you don't get *inharmonic* aliasing (the objectionable kind). In a wavetable situation, where you have a periodic and non-bandlimited waveform like a sawtooth, you of course generate aliases by sampling, but if the sampling rate is an integer multiple of the fundamental these aliases fall exactly upon harmonics, meaning you get just a slight (and in most cases imperceptible) adjustment to the harmonic spectrum.

In fact, the best kind of sound is when you forgo the reconstruction filter and do a direct zero-order-hold reconstruction; then you have a nice collection of spectral *images* at higher frequencies which are still integer multiples of the fundamental. It adds nice texture and grit without the bad sound of inharmonic aliasing wink
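
For anyone who wants the arithmetic behind those two paragraphs spelled out (standard sampling math, not anything specific to the Emu): if the stored wave has harmonics at k f_0 and the sample clock is f_s = N f_0 with N an integer, then every alias and every zero-order-hold image lands at

    \left| k f_0 \pm m f_s \right| = \left| k \pm m N \right| f_0, \qquad k, m, N \in \mathbb{Z},

i.e. still at an integer multiple of f_0, so the extra energy reinforces or sits on harmonics instead of falling between them.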

The Oberheim drum machines (DMX/DX) also used this technology, and I assume so did the Linn machines and many others of the period, although I've never owned the latter (they've always been too much $$). If you pitch-shift a sample upwards on the DX it actually sounds quite good; it's just played back at a higher rate.

There's also an overlap here with divide-down architectures, but that's another story...

Bottom line: it has historically been the *analog* parts of the architecture that lend digital synths their character, as much as if not more than the algorithms employed. It annoys me to no end how engineers blindly apply the "principle of sufficient software": they devise more and more sophisticated DSP techniques to better approximate solving a problem that would just disappear if they focused on the analog part of the design, or in general were more flexible about the paradigm.
commodorejohn
cycad73 wrote:
It annoys me to no end how engineers blindly apply the "principle of sufficient software": they devise more and more sophisticated DSP techniques to better approximate solving a problem that would just disappear if they focused on the analog part of the design, or in general were more flexible about the paradigm.

Hear, hear.

Of course, that's because hardware solutions affect the cost per unit, while software solutions are amortized over every unit sold.
ludotex
applause we're not worthy
Drakhe wrote:
artieTwelve
Analogue synth = continuous, semi-controlled chaos using as its medium a subset of the electromagnetic spectrum.
Digital synth = algorithmic, semi-controlled chaos that needs digital-to-analogue conversion to work in the same spectrum as analogue.