digital modular synth - hardware questions

From circuitbending to homebrew stompboxes & synths, keep the DIY spirit alive!

Moderators: lisa, luketeaford, Kent, Joe.

felix le chat
Veteran Wiggler
Posts: 683
Joined: Tue Aug 10, 2010 5:38 am

digital modular synth - hardware questions

Post by felix le chat » Sun Jun 21, 2020 5:55 pm

Hello

I am looking for a dedicated hardware system to run my digital modular synthesizer.
Currently it is a collection of prototype modules programmed inside Pure Data or Max.
It can be ported to C or C++ for more freedom and performance, but in any case I would prefer to run it on dedicated hardware instead of a generic computer (laptop, desktop, tablet) with a general-purpose operating system (Linux, Windows, macOS).

I found 2 solutions that look appropriate:

Bela.io system
https://github.com/BelaPlatform/Bela/wiki
This system is based on the BeagleBone Black single-board computer with a PRU (programmable realtime unit) that can run audio software with sub-millisecond latency. NOTE: I found almost nothing on PRUs, especially about the difference between a PRU and an FPGA (at the moment, they look similar to me).
The audio API (really the boring part for me) is provided by the Bela.io project, so all you need to do is write DSP code in C or C++. The board also has basic but convenient I/Os (including audio and MIDI).
Has anybody tried it? I wonder what kind of performance you can get, especially when polyphony or oversampling is required.
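
From what I can tell from the wiki, a Bela program is just setup()/render()/cleanup() callbacks, and all the DSP goes in render(). A minimal untested sketch (440 Hz sine to all outputs):

    #include <Bela.h>
    #include <cmath>

    float gPhase = 0.0f;

    bool setup(BelaContext *context, void *userData) { return true; }

    void render(BelaContext *context, void *userData) {
        for (unsigned int n = 0; n < context->audioFrames; n++) {
            float out = 0.1f * sinf(gPhase);
            gPhase += 2.0f * (float)M_PI * 440.0f / context->audioSampleRate;
            if (gPhase > 2.0f * (float)M_PI)
                gPhase -= 2.0f * (float)M_PI;
            for (unsigned int ch = 0; ch < context->audioOutChannels; ch++)
                audioWrite(context, n, ch, out); // one sample per output channel
        }
    }

    void cleanup(BelaContext *context, void *userData) {}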


FPGA system
FPGAs look great (if not the current best) for realtime digital synthesis, but I have just started to study them. Here are 3 links that I found helpful, even though I have not yet gone through them in detail:
https://www.fpga4fun.com/
https://www.nandland.com/articles/fpga- ... nners.html
https://numato.com/kb/learning-fpga-ver ... uction-a7/

As far as I understand, I would need at least:
a) the FPGA
b) 2 audio ADCs (for external processing)
c) 2 audio DACs (possibly 4 or 8)
NOTE: digital audio I/Os like ADAT, S/PDIF or AES would be OK even though they require an external converter box
d) several control ADCs and general-purpose digital I/O for adding potentiometers, buttons, 2 continuous pedal inputs, and perhaps a screen or 7-segment display
e) something that allows plugging in a MIDI controller keyboard or drum pad
f) a control processor (PIC, ARM, etc) that loads the FPGA bitstream on boot, and possibly allows switching between different programs
g) a PSU

What is the simplest way to build it? I won't be able to design and print a PCB, but it is possible to use several existing PCBs connected together (i.e. one for the FPGA, one for the converters, one for the control processor). If it helps, I have a Raspberry Pi 3, which could probably be used as the control processor.
Or, is there an all-in-one board with all required hardware?
Something similar to these boards:
https://www.xilinx.com/products/boards- ... /arty.html
https://store.digilentinc.com/nexys-vid ... lications/
(https://reference.digilentinc.com/_medi ... deo_rm.pdf)
The second one looks like overkill and is too expensive for me; it is only an example.


So what would you recommend?
1) Bela.io system
2) FPGA system
3) something else that has similar functionality
4) forget it, you are crazy, it is too difficult and takes too much time, please buy a commercial synthesizer or use your Linux/MacOS/Windows computer


Best regards

emmaker
Veteran Wiggler
Posts: 622
Joined: Sat Mar 10, 2012 5:07 pm
Location: PDX

Re: digital modular synth - hardware questions

Post by emmaker » Sun Jun 21, 2020 6:55 pm

These might give you some other ideas:

viewtopic.php?f=17&t=228165 - To me it doesn't have enough SPI/I2C ports for CV stuff, but it does have the audio interface and lots of memory.
viewtopic.php?f=17&t=218328&p=3274471&h ... i#p3274471
https://www.pjrc.com/store/teensy41.html
https://www.pjrc.com/store/teensy3_audio.html - Get the correct one for the 4.X if you go that route.

One thing I've been thinking about lately is using a Raspberry Pi and display as the GUI and having it talk to the 'music engine' via SPI. Pis and displays are cheap, and having the Pi run the GUI would give the 'music engine' more headroom for just music stuff.
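
If I did that I'd keep the SPI protocol dead simple, something like a fixed-size message (the field layout here is invented, just for illustration):

    #include <cstdint>

    // Hypothetical fixed-size GUI -> engine message (layout invented).
    struct __attribute__((packed)) ControlMsg {
        uint8_t  type;     // e.g. 0 = parameter change, 1 = patch select
        uint8_t  target;   // module/parameter index
        uint16_t value;    // raw 16-bit parameter value
        uint8_t  checksum; // XOR of the bytes above, verified engine-side
    };
    static_assert(sizeof(ControlMsg) == 5, "keep SPI frames fixed-size");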

Jay S.

nigel
Veteran Wiggler
Posts: 676
Joined: Wed Jan 16, 2013 6:49 am
Location: Melbourne, Australia

Re: digital modular synth - hardware questions

Post by nigel » Sun Jun 21, 2020 8:00 pm

Have you seen the Axoloti? http://www.axoloti.com/

Elahrairah
Learning to Wiggle
Posts: 24
Joined: Sat May 02, 2020 12:25 pm

Re: digital modular synth - hardware questions

Post by Elahrairah » Sun Jun 21, 2020 10:11 pm

Of course I don't completely understand the goal you're working towards, but I want to strongly recommend you stand on the shoulders of giants. It's 2020 and if you want to finish your project in the next few years, you should use something that does the work for you. Something open source will be available for your use and will have already sorted most of your issues. I'm saying Linux.

A Raspberry Pi may or may not have enough CPU power for your use, but it will interface with any hardware you need, and then it can be thrown in a black box and run without a screen and nobody will have to know you're cheating. Get the thing running Linux and an off-the-shelf audio input/output chip. The whole thing costs <$75, and it's reproducible and reliable.
If you need faster there are a hundred similar devices that scale up the horsepower and cost.

FPGAs can do amazing things, but you're going to have to hire a few engineers and develop the solution. Do you really need sub-microsecond responses?

--- I just looked into the Bela system, and it is what I have recommended above: a single board computer and some software reconfiguration will get you where you want to go. I don't know if they're a good product per se... but they have done a lot of the work for you, so if it can work, then it's a huge labor savings.

Edit: If Linux is too big and heavy and slow and annoying, or if your CPU requirements are small enough, then the Teensy, as suggested by emmaker above, is a great solution. It's smaller than a Raspberry Pi, lighter than Linux, and faster than an Arduino. We don't know how much processing power your idea will need, but a Teensy runs stuff like Ornament & Crime, Morphagene or Clouds. Is your project more similar to Clouds, or more similar to the Moog Subharmonicon? Lots of little chips are available that are 100x easier to use than an FPGA.
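
To give a flavour of how it's programmed: the Teensy Audio Library is Arduino C++ where you patch DSP objects together like modules. A minimal untested sketch (assumes the PJRC audio shield's SGTL5000 codec):

    #include <Audio.h>
    #include <Wire.h>
    #include <SPI.h>

    AudioSynthWaveform   osc;    // band-limited oscillator
    AudioEffectEnvelope  env;    // simple envelope
    AudioOutputI2S       i2sOut; // audio shield output
    AudioConnection      c1(osc, 0, env, 0);
    AudioConnection      c2(env, 0, i2sOut, 0); // left
    AudioConnection      c3(env, 0, i2sOut, 1); // right
    AudioControlSGTL5000 codec;  // codec chip on the audio shield

    void setup() {
        AudioMemory(12);         // reserve audio block buffers
        codec.enable();
        codec.volume(0.5);
        osc.begin(0.4, 220.0, WAVEFORM_SAWTOOTH);
    }

    void loop() {
        env.noteOn();            // "play" a note once a second
        delay(200);
        env.noteOff();
        delay(800);
    }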

EATyourGUITAR
has no life
Posts: 4812
Joined: Tue Aug 31, 2010 12:24 am
Location: Providence, RI, USA

Re: digital modular synth - hardware questions

Post by EATyourGUITAR » Thu Jun 25, 2020 11:08 am

Buy a used monome aleph. Learn to program the Blackfin DSP using the monome Linux distro for Blackfin development. $1000 for the aleph; the software is free. If you want a professional dev kit with a floating license it will be about $5000 for the Blackfin developer software. Another option would be Kyma X. This is already a big project just to relearn everything. If you want to be a bare-metal embedded DSP application engineer then you have a lot of work ahead. Programming FPGA + IO + DSP is not an easy first project. There are microcontrollers that have an FPGA packaged together with them, but there are also regular FPGAs. Using a platform with lots of convenient libraries does not always make things easy or simple. I don't know how much DSP power you need. I do know that the Eventide H9000 has DDR3 and a massive FPGA. Even if you started with a working H9000 you would be there for 6 years learning how to program it. It is almost as bad as a PC. Writing an operating system from scratch on a PC would be insanity. The project needs to have limits.
WWW.EATYOURGUITAR.COM <---- MY DIY STUFF

felix le chat
Veteran Wiggler
Posts: 683
Joined: Tue Aug 10, 2010 5:38 am

Re: digital modular synth - hardware questions

Post by felix le chat » Thu Jun 25, 2020 5:50 pm

Many thanks for your answers

What I need is a performance synthesizer that can make the sounds I want. It would usually be played live with a keyboard or a multipad, plus built-in sequencers sometimes.
I have checked commercial synthesizers, but none of them are close enough to what I need, not even (impractically) combining several great synths. I have programmed synths in Max or Pd and I know how to port them to C or C++ (possibly relying on open-source stuff when required), except that I will never write an audio API myself.
Why not use a computer then?
- it is either too big or too fragile, especially for some live concerts
- latency is not ideal (low latency and low jitter are more important than huge CPU power; I am looking for the opposite of a throughput-oriented device)

The Nord Micro Modular is perhaps the closest thing to what I need, because it allows programming a lot of things. I started to learn synthesis with this machine, I still have it, and the only thing I don't like is how polyphony is managed (only duplicated voices). That said, most of my computer audio programs cannot be ported to the Nord Micro Modular; it has its limits.
So yes, my machine would be a kind of '2020 Nord Micro Modular', albeit without the graphical modular-like patch-programming user interface. Instead, I would have a library of very simple modules (C or C++ functions or objects without a graphical interface) and create "patches" using these modules; actually each "patch" would be a compiled C or C++ program that I could then load and use live.

Likely examples: 16-voice Karplus-Strong synth, east or west coast style monophonic synths, 8-voice synth with 8 very different voices, effects processors (reverb, distortion, granulator, etc), minimalist polyphonic samplers, sequencers, etc

Will never happen: vintage or modern gear emulation, huge polyphony, 12 convolution reverbs in parallel, 32bit 192kHz output (16-24bit 44100-48000 Hz is ideal; oversampling will be done inside the programs, only in the particular functions that require it), streaming huge sample banks from a separate drive, recording, etc

In my experience the most CPU-intensive tasks on a computer are:
- local oversampling to reduce aliasing
- very short delays with feedback (this includes feedback patching)
- polyphony
So the worst case would be an 8-16 voice polyphonic synth that has 16x oversampling for some parts of the voices (e.g. for FM or wave folders) and tuned Karplus-Strong resonators.
This does not apply to FPGA systems, but they must have their own worst cases as well.
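
To make that worst case concrete, each Karplus-Strong voice is basically a short delay line with filtered feedback; a rough sketch of what one voice computes per sample (the names and the simple two-sample damping filter are just illustrative):

    #include <cstdlib>
    #include <vector>

    // One Karplus-Strong voice: a noise burst circulating in a short
    // delay line with damped feedback.
    struct KarplusVoice {
        std::vector<float> delay;
        size_t pos = 0;
        float feedback = 0.996f;          // sets the decay time of the "string"

        void pluck(float sampleRate, float freqHz) {
            delay.assign((size_t)(sampleRate / freqHz), 0.0f);
            pos = 0;
            for (float &s : delay)        // excite with white noise
                s = 2.0f * rand() / (float)RAND_MAX - 1.0f;
        }
        float process() {                 // call once per sample
            if (delay.empty()) return 0.0f;
            size_t next = (pos + 1) % delay.size();
            // averaging two adjacent samples = gentle lowpass (string damping)
            float out = 0.5f * (delay[pos] + delay[next]);
            delay[pos] = out * feedback;
            pos = next;
            return out;
        }
    };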


As far as I understand 4 types of hardware would work:
- general-purpose CPU
- DSP CPU
- hybrid system like Bela (general-purpose + PRU)
- FPGA

I know nothing about
- Arduino, Python, Faust
- the practical differences between coding in C/C++ for a general-purpose processor and coding in C/C++ for a DSP processor
- what makes particular hardware better suited to low-latency audio (besides not being used for non-audio stuff at the same time and not using slow interfaces like USB)


I checked the systems you recommended --

Axoloti
This one is too close to Max / Micro Modular / etc (graphical programming). I want to program the modules myself

Daisy
Looks good because of the low latency and the built-in audio converters.
If I understand correctly, it features a C++ audio API like the Bela system does. Besides this I have no idea which one is better suited, but they both look good and simple enough.

Teensy
This one looks cool too, but I don't understand clearly how it is programmed. I guess that is because it is related to Arduino, with which I have zero experience.

FPGA
For me the advantages of FPGAs are
- lowest latency (though of course the audio converters still have some "considerable" latency)
- high internal sampling rate so no need for oversampling (basically you don't have to care about aliasing, except when you do want to add it as a famous vintage digital effect)

What is difficult exactly, when using FPGAs?
I have never used Verilog or VHDL, but I understand the concept and would be OK with learning one if necessary.
I don't have a clear idea of which hardware I would need, but there must be an all-in-one solution that has all required hardware so that it is ready to be programmed, right?

(edited)
Last edited by felix le chat on Thu Jun 25, 2020 7:21 pm, edited 1 time in total.

felix le chat
Veteran Wiggler
Posts: 683
Joined: Tue Aug 10, 2010 5:38 am

Re: digital modular synth - hardware questions

Post by felix le chat » Thu Jun 25, 2020 6:24 pm

Elahrairah wrote:
Sun Jun 21, 2020 10:11 pm
Of course I don't completely understand the goal you're working towards,
(see the message above)
but I want to strongly recommend you stand on the shoulders of giants. It's 2020 and if you want to finish your project in the next few years, you should use something that does the work for you. Something open source will be available for your use and will have already sorted most of your issues. I'm saying Linux.
I totally agree. When I said I wanted to make the modules myself, I was only talking from a sound and functionality point of view. I do not want to reinvent the wheel. Open source and Linux are great for this.
A Raspberry Pi may or may not have enough CPU power for your use, but it will interface with any hardware you need, and then it can be thrown in a black box and run without a screen and nobody will have to know you're cheating. Get the thing running Linux and an off-the-shelf audio input/output chip. The whole thing costs <$75, and it's reproducible and reliable.
If you need faster there are a hundred similar devices that scale up the horsepower and cost.
I have a Raspberry Pi 3, and it is probably a good solution. About 2 years ago I measured the audio latency and the results were promising. Then I did not have any time for this project, and meanwhile I discovered some similar devices that were apparently better suited to low-latency realtime use (like Bela).
FPGAs can do amazing things, but you're going to have to hire a few engineers and develop the solution. Do you really need sub-microsecond responses?
This is limited by the audio D/A converters and by MIDI control anyway. I don't need sub-microsecond response, but sub-millisecond would be great! If the latency is worse than an x86 or amd64 laptop's, then this project becomes kind of pointless. When playing fast tracks with complex rhythms, low latency helps a lot: it feels more like an acoustic instrument and it is easier to play.

felix le chat
Veteran Wiggler
Posts: 683
Joined: Tue Aug 10, 2010 5:38 am

Re: digital modular synth - hardware questions

Post by felix le chat » Thu Jun 25, 2020 6:43 pm

EATyourGUITAR wrote:
Thu Jun 25, 2020 11:08 am
Buy a used monome aleph. Learn to program the Blackfin DSP using the monome Linux distro for Blackfin development. $1000 for the aleph; the software is free.
Unfortunately it is too expensive for me. The initial budget was more like 250 max.
If you want a professional dev kit with a floating license it will be about $5000 for the Blackfin developer software.
No need for it; at the moment I only want to make a fully working prototype that I can reliably use in practice. If it turns out really good and later becomes more user-friendly, I might try to team up with a reputable manufacturer, but that is very unlikely to happen and I am not sure people would be interested in it anyway.
Another option would be Kyma X.
I will look later; the site is down at the moment.
This is already a big project just to relearn everything. If you want to be a bare-metal embedded DSP application engineer then you have a lot of work ahead. Programming FPGA + IO + DSP is not an easy first project.
Why exactly?
There are microcontrollers that have an FPGA packaged together with them, but there are also regular FPGAs.
Do you know an all-in-one FPGA+microcontroller board that has all the required hardware for making a synthesizer?
I don't know how much DSP power you need.
I don't need that much DSP power. Low latency and low jitter are way more important. See 2 messages above for more details about this.
I do know that the Eventide H9000 has DDR3 and a massive FPGA. Even if you started with a working H9000 you would be there for 6 years learning how to program it.
Overkill, and I am not even sure it can do all I need (not enough budget anyway)

emmaker
Veteran Wiggler
Posts: 622
Joined: Sat Mar 10, 2012 5:07 pm
Location: PDX

Re: digital modular synth - hardware questions

Post by emmaker » Thu Jun 25, 2020 8:24 pm

Unless you can design a computer from scratch I'd stay away from FPGAs. That's a simplistic view of what you'd be doing with an FPGA (actually multiple compute engines). You do have access to high-level logic blocks (adders, multipliers) and logic cores (serial, SPI, I2C, DDR interfaces), but it is still not an easy task.

The way I'd approach making a synth like this would be to write the code in C on a PC (Windows/Mac/Linux). I have an i7 PC and I can run 2-3 soft synths without major issues, so doing a single soft synth should work fine. Another thing about doing it this way is that development and debugging will be faster and easier. I've actually burnt out chips (programmed them till they wouldn't program any more) by reprogramming them 5-20 times a day over long periods of time.

I would do this implementation with abstraction layers for the UI and IO. Start looking at what a number of embedded processors do for their audio interfaces, so you could implement something like that for the hardware abstraction layer before starting. This would be done concentrating on an efficient 'pipeline' and the ability to define buffer sizes for low latency. Here I wouldn't worry about latency; concentrate more on functionality, architecture and algorithms. If you can't get this to work then you probably won't be able to get an embedded solution to work.
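
The abstraction layer can be as small as a single interface the synth engine renders through, with one implementation per platform. A sketch (all names invented for illustration):

    #include <cstddef>

    // Per-buffer render callback: the synth engine only ever sees this.
    using RenderCallback = void (*)(const float *in, float *out,
                                    std::size_t frames, void *userData);

    struct AudioConfig {
        float sampleRate;
        std::size_t framesPerBuffer;   // keep small for low latency
        int inChannels, outChannels;
    };

    // Each platform (portaudio host, Daisy, Bela, ...) supplies its own
    // implementation; the engine code never changes.
    class AudioBackend {
    public:
        virtual ~AudioBackend() = default;
        virtual bool start(const AudioConfig &cfg,
                           RenderCallback render, void *userData) = 0;
        virtual void stop() = 0;
    };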

After that is done I'd move it to an embedded design. If you have trouble (or think you will) with throughput, think of a distributed architecture with multiple processors handling different components of the system.

Good luck.
Jay S.

nigel
Veteran Wiggler
Posts: 676
Joined: Wed Jan 16, 2013 6:49 am
Location: Melbourne, Australia

Re: digital modular synth - hardware questions

Post by nigel » Thu Jun 25, 2020 8:37 pm

felix le chat wrote:
Thu Jun 25, 2020 5:50 pm
I checked the systems you recommended --
Axoloti
This one is too close to Max / Micro Modular / etc (graphical programming). I want to program the modules myself
I'm not an Axoloti user, but if I'm understanding it correctly, you can program your own objects quite easily.

felix le chat
Veteran Wiggler
Posts: 683
Joined: Tue Aug 10, 2010 5:38 am

Re: digital modular synth - hardware questions

Post by felix le chat » Fri Jun 26, 2020 4:58 pm

emmaker wrote:
Thu Jun 25, 2020 8:24 pm
Unless you can design a computer from scratch I'd stay away from FPGAs. That's a simplistic view of what you'd be doing with an FPGA (actually multiple compute engines). You do have access to high-level logic blocks (adders, multipliers) and logic cores (serial, SPI, I2C, DDR interfaces), but it is still not an easy task.
OK, so if I understand correctly, there are no FPGA prototyping boards that have all the hardware plus the software infrastructure for doing audio. In that case, yes, it would be risky (as I already said, I do not want to develop an audio API for a C/C++ solution, and of course the same applies to FPGAs/Verilog).
The closest I have come to designing a computer from scratch was building a MIDI control surface with a PIC16F877, multiplexers, sliders, a screen, and various utility components, but the C code for the PIC was rather simple, and I did not build the PIC itself.
The way I'd approach making a synth like this would be to write the code in C on a PC (Windows/Mac/Linux).
Just to be sure, why C and not C++? I am asking because two solutions mentioned in this discussion (Bela and Daisy) have a built-in C++ audio API. As far as I know:
- C is faster than C++
- C++ is still fast
- you can use C-like functions in C++
- you can do object-oriented programming in C
- for my project, synthesis modules can either be C++ objects or C/C++ functions.

On a PC I would use either portaudio (C) or rtaudio (C++) as the audio API, unless there is something better today that I have missed.
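
For reference, the portaudio callback pattern is short; a minimal untested sketch (stereo 440 Hz sine, error handling omitted, assuming 48 kHz):

    #include <cmath>
    #include <portaudio.h>

    static int render(const void *in, void *out, unsigned long frames,
                      const PaStreamCallbackTimeInfo *, PaStreamCallbackFlags,
                      void *userData) {
        float *buf = (float *)out;
        float *phase = (float *)userData;
        for (unsigned long i = 0; i < frames; i++) {
            buf[2 * i] = buf[2 * i + 1] = 0.1f * sinf(*phase); // stereo
            *phase += 2.0f * (float)M_PI * 440.0f / 48000.0f;
        }
        return paContinue;
    }

    int main() {
        float phase = 0.0f;
        PaStream *stream;
        Pa_Initialize();
        Pa_OpenDefaultStream(&stream, 0, 2, paFloat32, 48000, 256, render, &phase);
        Pa_StartStream(stream);
        Pa_Sleep(2000);                    // let it play for two seconds
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }
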
I have an i7 PC and I can run 2-3 soft synths without major issues, so doing a single soft synth should work fine.
Sure, what I want to do can already run in Max on my 10+ year old PC (also an i7)
Another thing about doing it this way is that development and debugging will be faster and easier. I've actually burnt out chips (programmed them till they wouldn't program any more) by reprogramming them 5-20 times a day over long periods of time.

I would do this implementation with abstraction layers for the UI and IO. Start looking at what a number of embedded processors do for their audio interfaces, so you could implement something like that for the hardware abstraction layer before starting. This would be done concentrating on an efficient 'pipeline' and the ability to define buffer sizes for low latency. Here I wouldn't worry about latency; concentrate more on functionality, architecture and algorithms. If you can't get this to work then you probably won't be able to get an embedded solution to work.
Yes, this is a good strategy. Abstraction layers were planned from the beginning (UI and IO, but also the audio API and the "patch" / module rendering order), because this way I can use almost the same code on several platforms (PC standalone, PC VST, embedded, etc) just by changing the audio API. This is impossible with an FPGA system.
After that is done I'd move it to an embedded design. If you have trouble (or think you will) with throughput, think of a distributed architecture with multiple processors handling different components of the system.
This is what Bela does (audio is processed by a PRU without using ALSA, but Linux still runs in parallel behind it and you can use both).
Good luck.
Jay S.
Thanks!

felix le chat
Veteran Wiggler
Posts: 683
Joined: Tue Aug 10, 2010 5:38 am

Re: digital modular synth - hardware questions

Post by felix le chat » Fri Jun 26, 2020 6:30 pm

nigel wrote:
Thu Jun 25, 2020 8:37 pm
felix le chat wrote:
Thu Jun 25, 2020 5:50 pm
I checked the systems you recommended --
Axoloti
This one is too close to Max / Micro Modular / etc (graphical programming). I want to program the modules myself
I'm not an Axoloti user, but if I'm understanding it correctly, you can program your own objects quite easily.
Thanks, yes you are right:
http://community.axoloti.com/t/coding-a ... jects/2606

emmaker
Veteran Wiggler
Posts: 622
Joined: Sat Mar 10, 2012 5:07 pm
Location: PDX

Re: digital modular synth - hardware questions

Post by emmaker » Fri Jun 26, 2020 6:56 pm

felix le chat wrote:
Fri Jun 26, 2020 4:58 pm
emmaker wrote:
Thu Jun 25, 2020 8:24 pm
Unless you can design a computer from scratch I'd stay away from FPGAs. That's a simplistic view of what you'd be doing with an FPGA (actually multiple compute engines). You do have access to high-level logic blocks (adders, multipliers) and logic cores (serial, SPI, I2C, DDR interfaces), but it is still not an easy task.
OK, so if I understand correctly, there are no FPGA prototyping boards that have all the hardware plus the software infrastructure for doing audio. In that case, yes, it would be risky (as I already said, I do not want to develop an audio API for a C/C++ solution, and of course the same applies to FPGAs/Verilog).
The closest I have come to designing a computer from scratch was building a MIDI control surface with a PIC16F877, multiplexers, sliders, a screen, and various utility components, but the C code for the PIC was rather simple, and I did not build the PIC itself.
I didn't say that very well. By computer I really meant a processor at the logic level, actually probably multiple ones (VCO, filter, distortion, ...). You would have to define the processor registers, buses, ALU, IO and the sequencer/microcode that runs your algorithms on those processors. Some FPGAs do have processor cores in their libraries, but you might have to buy them, and in my opinion at that point you might as well use a standard processor. Then you'd have to add all your peripherals (ADCs, DACs, digital IO) on top of that.
felix le chat wrote:
Fri Jun 26, 2020 4:58 pm
The way I'd approach making a synth like this would be to write the code in C on a PC (Windows/Mac/Linux).
Just to be sure, why C and not C++? I am asking because two solutions mentioned in this discussion (Bela and Daisy) have a built-in C++ audio API. As far as I know:
- C is faster than C++
- C++ is still fast
- you can use C-like functions in C++
- you can do object-oriented programming in C
- for my project, synthesis modules can either be C++ objects or C/C++ functions.

On a PC I would use either portaudio (C) or rtaudio (C++) as the audio API, unless there is something better today that I have missed.
I've worked mainly on embedded/bare-metal systems for a long time as a contractor, since before C++. Recently I've worked on things that go from an NXP KL02 (4K RAM, 32K Flash) to a dual-Xeon-based system (768G RAM, 7TB SSD, don't ask). Most work lately has been with NXP and STMicro ARM chips, though, using Eclipse development environments.

I'd say probably most of my bias against C++ is historic. In general though I view it as using a sledgehammer when a regular hammer works just fine. C++ was designed to handle objects; initially it was not that efficient and had a lot of bloat in its libraries. So if you want to write an OS-based application that uses a lot of objects, use it. In a 'typical' embedded environment you don't have a lot of objects, you might be memory-limited and you might want things running as fast as they can. Also, I find C++ code hard to debug if you didn't write the code yourself and a lot of abstraction/overloading is used.

That being said, if people are writing their libraries in C++, that is what you have to use, and as you said, you can write C code in the C++ environment. I just got a couple of Daisies and will be playing with them, so I'll see how they work with their libraries.

One thing you might do is get a cheap ARM core (Teensy, Nucleo, ...) and write something that represents what you are doing (filter, VCO, ...) in C++ and port it to C. Then look to see how efficient and fast they are.

FYI: One way to speed up code, sometimes significantly, is to use 'register'-based variables. Pointers to structures/objects that are used in routines a lot are good candidates, along with the most-used variables in a routine. The number of register variables you have available depends on the processor and compiler you are using; you will have to look that up. Another thing that has helped me out a lot is figuring out how to optimize the math and use precomputed tables. A lot of processors nowadays have a lot of Flash. So if you have Flash left over, see if you can precompute some numbers and put them in Flash. I had one project where there was 300K+ of Flash and the code used about 32K. I took most of the rest of the Flash and put in some fixed-point tables of precomputed data, and that made things faster and cut the code down.
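
A sketch of the table idea (table size and linear interpolation are arbitrary choices; on an MCU you'd bake the table into Flash as a const array instead of filling it at startup):

    #include <cmath>

    constexpr int TABLE_SIZE = 1024;
    static float gSineTable[TABLE_SIZE + 1];   // +1 guard point for interpolation

    void initTables() {
        for (int i = 0; i <= TABLE_SIZE; i++)
            gSineTable[i] = sinf(2.0f * (float)M_PI * i / TABLE_SIZE);
    }

    // One lookup and a multiply-add instead of a sinf() call per sample.
    inline float tableSine(float phase01) {     // phase in [0, 1)
        float pos = phase01 * TABLE_SIZE;
        int i = (int)pos;
        float frac = pos - i;
        return gSineTable[i] + frac * (gSineTable[i + 1] - gSineTable[i]);
    }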

About portaudio or rtaudio: unless they are available for your target system, or you're going to rewrite them for your target system, don't use them. Otherwise you really won't be testing the code you'll be running on the target. You want to change as little code as possible when you go from the PC to the target.
felix le chat wrote:
Fri Jun 26, 2020 4:58 pm
I have an i7 PC and I can run 2-3 soft synths without major issues, so doing a single soft synth should work fine.
Sure, what I want to do can already run in Max on my 10+ year old PC (also an i7)
Another thing about doing it this way is that development and debugging will be faster and easier. I've actually burnt out chips (programmed them till they wouldn't program any more) by reprogramming them 5-20 times a day over long periods of time.

I would do this implementation with abstraction layers for the UI and IO. Start looking at what a number of embedded processors do for their audio interfaces, so you could implement something like that for the hardware abstraction layer before starting. This would be done concentrating on an efficient 'pipeline' and the ability to define buffer sizes for low latency. Here I wouldn't worry about latency; concentrate more on functionality, architecture and algorithms. If you can't get this to work then you probably won't be able to get an embedded solution to work.
Yes, this is a good strategy. Abstraction layers were planned from the beginning (UI and IO, but also the audio API and the "patch" / module rendering order), because this way I can use almost the same code on several platforms (PC standalone, PC VST, embedded, etc) just by changing the audio API. This is impossible with an FPGA system.
After that is done I'd move it to an embedded design. If you have trouble (or think you will) with throughput, think of a distributed architecture with multiple processors handling different components of the system.
This is what Bela does (audio is processed by a PRU without using ALSA, but Linux still runs in parallel behind it and you can use both).
Good luck.
Jay S.
Thanks!
Happy coding.
Jay S.

felix le chat
Veteran Wiggler
Posts: 683
Joined: Tue Aug 10, 2010 5:38 am

Re: digital modular synth - hardware questions

Post by felix le chat » Mon Jun 29, 2020 5:07 pm

emmaker wrote:
Fri Jun 26, 2020 6:56 pm
I'd say probably most of my bias against C++ is historic. In general though I view it as using a sledgehammer when a regular hammer works just fine. C++ was designed to handle objects; initially it was not that efficient and had a lot of bloat in its libraries. So if you want to write an OS-based application that uses a lot of objects, use it. In a 'typical' embedded environment you don't have a lot of objects, you might be memory-limited and you might want things running as fast as they can. Also, I find C++ code hard to debug if you didn't write the code yourself and a lot of abstraction/overloading is used.

That being said, if people are writing their libraries in C++, that is what you have to use, and as you said, you can write C code in the C++ environment. I just got a couple of Daisies and will be playing with them, so I'll see how they work with their libraries.

One thing you might do is get a cheap ARM core (Teensy, Nucleo, ...) and write something that represents what you are doing (filter, VCO, ...) in C++ and port it to C. Then look to see how efficient and fast they are.
Yes, the only reason to use C++ is that the audio API is written in C++; otherwise I would use C for this project.
FYI: One way to speed up code, sometimes significantly, is to use 'register'-based variables. Pointers to structures/objects that are used in routines a lot are good candidates, along with the most-used variables in a routine. The number of register variables you have available depends on the processor and compiler you are using; you will have to look that up. Another thing that has helped me out a lot is figuring out how to optimize the math and use precomputed tables. A lot of processors nowadays have a lot of Flash. So if you have Flash left over, see if you can precompute some numbers and put them in Flash. I had one project where there was 300K+ of Flash and the code used about 32K. I took most of the rest of the Flash and put in some fixed-point tables of precomputed data, and that made things faster and cut the code down.
Great tips
Thank you very much for sharing your experience :tu:
About portaudio or rtaudio: unless they are available for your target system, or you're going to rewrite them for your target system, don't use them. Otherwise you really won't be testing the code you'll be running on the target. You want to change as little code as possible when you go from the PC to the target.
Sorry, but I am not sure I understand correctly. Especially for testing, it would be good to hear the resulting sound in realtime (even with a huge latency). Is it possible to use the Daisy or Bela audio APIs on a Linux/Mac/Windows PC, if I choose one of these target boards? I did not find any information about this.
Or did you mean just running the synthesis algorithms in non-realtime (no audio output), writing the result to a PCM WAV audio file, and opening it in a player or DAW to check?

EATyourGUITAR
has no life
Posts: 4812
Joined: Tue Aug 31, 2010 12:24 am
Location: Providence, RI, USA

Re: digital modular synth - hardware questions

Post by EATyourGUITAR » Mon Jun 29, 2020 5:51 pm

in embedded programming you don't always get to choose whether you will exclude C++. CMSIS is used in pretty much all ARM microcontrollers unless you try pretty hard to avoid it. CMSIS is C, but it has to work from C++ too, which is where something you see a lot in embedded code comes in: extern "C".

https://embeddedartistry.com/blog/2017/ ... -extern-c/
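
the usual pattern from that article: C headers wrap their declarations so they still link correctly when included from C++ (the function name here is made up):

    #ifdef __cplusplus
    extern "C" {
    #endif

    void dsp_process(float *buf, int n);  /* hypothetical C routine */

    #ifdef __cplusplus
    }
    #endif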

there is a lot that can go wrong with an ARM Cortex M3/M4 when using the compiler to generate the assembler but also directly reading and writing registers. R7 through R12 can have side effects, or you can just use them as globals. you can use them as temps if you always use them as temps. R0 through R5 get backed up and restored when a subroutine is called. you could in theory use them as globals, but the compiler will generate code that uses R0 through R3 with absolutely no respect for your data that you have stored there. this leaves R4 and R5 as the only safe private transient (not persistent) register storage. for R6 through R12 you must be consistent in how you use these globals. if they are garbage storage then they must always be garbage storage for that one register. the good news is that you can use bit fields or half-words when using registers to store data. in that way, you actually have a lot more to work with. the code can be very difficult to read though if everything is just ADD R6, R7, R8. it is easy to forget what R6 holds without a name.
WWW.EATYOURGUITAR.COM <---- MY DIY STUFF

emmaker
Veteran Wiggler
Posts: 622
Joined: Sat Mar 10, 2012 5:07 pm
Location: PDX

Re: digital modular synth - hardware questions

Post by emmaker » Mon Jun 29, 2020 6:01 pm

felix le chat wrote:
Mon Jun 29, 2020 5:07 pm
About portaudio or rtaudio: unless they are available for your target system, or you're going to rewrite them for your target system, don't use them. Otherwise you really won't be testing the code you'll be running on the target. You want to change as little code as possible when you go from the PC to the target.
Sorry, but I am not sure I understand correctly. Especially for testing, it would be good to hear the resulting sound in realtime (even with a huge latency). Is it possible to use the Daisy or Bela audio APIs on a Linux/Mac/Windows PC, if I choose one of these target boards? I did not find any information about this.
Or did you mean just running the synthesis algorithms in non-realtime (no audio output), writing the result to a PCM WAV audio file, and opening it in a player or DAW to check?
I guess what I was really trying to say is: make an abstraction layer for your audio input/output if you use portaudio or rtaudio on the host, and don't call them directly. When you pick your target (Daisy, Teensy, ...) write the abstraction layer for your target.

If there are a lot of common parameters between the host and target audio interfaces, it's better to use macros to define the audio abstraction layer than to hard-code it. If you hard-code your abstraction layer, there are going to be subroutine calls to it and then to the audio code it uses. With macros the code is embedded inline, which is faster but harder to debug if you need to do that. But then it's hard to say if you'll really gain anything speed-wise. It really depends on the processor, compiler and libraries.
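
Something like this, where both backend function names are made up just to show the shape of it:

    // The same engine source compiles against either backend, with the
    // call inlined instead of going through an extra subroutine.
    #ifdef TARGET_DAISY
      #define AUDIO_WRITE(frame, ch, v) daisy_write_sample((frame), (ch), (v))
    #else   // PC host build
      #define AUDIO_WRITE(frame, ch, v) portaudio_buffer_write((frame), (ch), (v))
    #endif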

Good luck.
Jay S.

EATyourGUITAR
has no life
Posts: 4812
Joined: Tue Aug 31, 2010 12:24 am
Location: Providence, RI, USA

Re: digital modular synth - hardware questions

Post by EATyourGUITAR » Mon Jun 29, 2020 8:43 pm

for a beginner, don't you think it would be easier to start over than to write a library that fits an existing API that is in fact cross-platform? all while at the same time writing the application. think beginner level. at that level they don't know what microcontroller to select. they don't know what book to read. they don't always know the basics.

something like the daisy would be perfect if and when it actually has a huge community with lots of code examples. 480MHz is pretty serious. when you figure out that you can use DMA with external RAM then you really get an advantage in clock cycles. you could argue that this effectively increases RAM bandwidth and reduces RAM latency. some microcontrollers can DMA the ADC and DAC also. if this is the case then you probably don't want to write your own hardware abstraction layer. the off-the-shelf libraries are probably way more efficient when compiled. DMA can be vendor-specific.

the only reason to write your own hardware abstraction layer would be if you want 100% compatibility when porting code between different architectures. if you had a huge pile of existing code that needed to be cross-compiled, for example. rewriting the lib avoids emulating the architecture in detail using less efficient methods. I agree that if this is all done in macros then great. if this is not done in macros then maybe not great. however some things must be done with interrupts, so it becomes a subroutine even if it is inline inside an ISR. that would be an example of no difference in performance.
WWW.EATYOURGUITAR.COM <---- MY DIY STUFF

emmaker
Veteran Wiggler
Posts: 622
Joined: Sat Mar 10, 2012 5:07 pm
Location: PDX

Re: digital modular synth - hardware questions

Post by emmaker » Mon Jun 29, 2020 11:05 pm

EATyourGUITAR wrote:
Mon Jun 29, 2020 8:43 pm
for a beginner, don't you think it would be easier to start over than to write a library that fits an existing API that is in fact cross-platform? all while at the same time writing the application. think beginner level. at that level they don't know what microcontroller to select. they don't know what book to read. they don't always know the basics.

something like the daisy would be perfect if and when it actually has a huge community with lots of code examples. 480MHz is pretty serious. when you figure out that you can use DMA with external RAM then you really get an advantage in clock cycles. you could argue that this effectively increases RAM bandwidth and reduces RAM latency. some microcontrollers can DMA the ADC and DAC also. if this is the case then you probably don't want to write your own hardware abstraction layer. the off-the-shelf libraries are probably way more efficient when compiled. DMA can be vendor-specific.

the only reason to write your own hardware abstraction layer would be if you want 100% compatibility when porting code between different architectures. if you had a huge pile of existing code that needed to be cross-compiled, for example. rewriting the lib avoids emulating the architecture in detail using less efficient methods. I agree that if this is all done in macros then great. if this is not done in macros then maybe not great. however some things must be done with interrupts, so it becomes a subroutine even if it is inline inside an ISR. that would be an example of no difference in performance.
I don't think you are understanding what I'm saying, and you are making a more or less trivial problem more complicated than needed. I don't know if it's the way I've stated things or the way you're understanding them.

Maybe I should have said library abstraction layer instead of hardware abstraction layer. Places I've worked call it a hardware abstraction layer if it is on top of a HW library, HW driver or the actual hardware.

There's really no reason to write any hardware-level, driver or library code here if the right hardware and software are chosen. The library abstraction layer shouldn't be more than 2-4 pages of code. The biggest issue would be handling callbacks for buffers.

If you want, you can get your target set up with its libraries and do all your development on it and avoid working on a host. If I'm doing heavy-duty math or complicated stuff I'll do the core code/algorithms on a Windows/Linux PC host and get it going there. It's faster not having to program a device, and debugging is easier. Then I move it to the target and optimize it.

Jay S.

EATyourGUITAR
has no life
Posts: 4812
Joined: Tue Aug 31, 2010 12:24 am
Location: Providence, RI, USA

Re: digital modular synth - hardware questions

Post by EATyourGUITAR » Mon Jun 29, 2020 11:41 pm

emmaker wrote:
Mon Jun 29, 2020 11:05 pm
I don't think you are understanding what I'm saying, and you are making a more or less trivial problem more complicated than needed.
I think you are presenting advanced techniques to someone who is a beginner. assuming that they will most likely end up on an ARM M3, M4, or H7, I don't see where a lot of this advice is applicable. although I admit that I am not as experienced in ARM development as you, I have been coding my whole life, so I am somewhere in the middle. I don't think I am making it complicated. I think the things that you find trivial are really based on knowledge of a plethora of really obscure factoids about the ARM v7-M instruction set in an 858-page reference manual. if someone is starting to learn ARM, let them write it in C, not assembler, or even worse, mixed assembler and C. I had to learn GNU linker language and assembler and C and C++, GCC, ARM v7, CMSIS, the ABI, and vendor-specific peripheral libraries, and I still suck at it. as a Haskell programmer, this whole idea of side effects is against my best practices, although I am sure you can make it work. we are just complete opposites in how we approach the same objective. I know what registers are in use by CMSIS and vendor driver libs, but this is not trivial. this is not self-explanatory. to make it safer you can use the MPU. how is a beginner going to grasp this?
emmaker wrote:
Mon Jun 29, 2020 11:05 pm
Maybe I should have said library abstraction layer instead of hardware abstraction layer. Places I've worked call it a hardware abstraction layer if it is on top of a HW library, HW driver or the actual hardware.

There's really no reason to write any hardware-level, driver or library code here if the right hardware and software are chosen. The library abstraction layer shouldn't be more than 2-4 pages of code. The biggest issue would be handling callbacks for buffers.
I have no idea what exactly you mean when you say hardware abstraction layer. that could be a complete integrated driver and API written with vendor-specific assembler intrinsics, or it could be some simple parsing macro that makes your code easy to cross-compile when porting over from the computer. so what I thought you meant was probably me filling in the blanks.

there is a trade-off when you start using a lot of abstraction. your code becomes more readable in main but more complicated overall. this only helps beginners if someone does everything for you as an off-the-shelf lib.
emmaker wrote:
Mon Jun 29, 2020 11:05 pm
If you want, you can get your target set up with its libraries and do all your development on it and avoid working on a host. If I'm doing heavy-duty math or complicated stuff I'll do the core code/algorithms on a Windows/Linux PC host and get it going there. It's faster not having to program a device, and debugging is easier. Then I move it to the target and optimize it.

Jay S.
can you tell me more about this? I'm not sure if you mean you use C#, or IAR with an M4 emulator? or GCC x86 C? QEMU ARM v7?
WWW.EATYOURGUITAR.COM <---- MY DIY STUFF

felix le chat
Veteran Wiggler
Posts: 683
Joined: Tue Aug 10, 2010 5:38 am

Re: digital modular synth - hardware questions

Post by felix le chat » Tue Jun 30, 2020 11:20 am

EATyourGUITAR wrote:
Mon Jun 29, 2020 5:51 pm
in embedded programming you don't always get to choose whether you will exclude C++. CMSIS is used in pretty much all ARM microcontrollers unless you try pretty hard to avoid it. CMSIS is C, but it has to work from C++ too, which is where something you see a lot in embedded code comes in: extern "C".

https://embeddedartistry.com/blog/2017/ ... -extern-c/

there is a lot that can go wrong with an ARM Cortex M3/M4 when using the compiler to generate the assembler but also directly reading and writing registers. R7 through R12 can have side effects, or you can just use them as globals. you can use them as temps if you always use them as temps. R0 through R5 get backed up and restored when a subroutine is called. you could in theory use them as globals, but the compiler will generate code that uses R0 through R3 with absolutely no respect for your data that you have stored there. this leaves R4 and R5 as the only safe private transient (not persistent) register storage. for R6 through R12 you must be consistent in how you use these globals. if they are garbage storage then they must always be garbage storage for that one register. the good news is that you can use bit fields or half-words when using registers to store data. in that way, you actually have a lot more to work with. the code can be very difficult to read though if everything is just ADD R6, R7, R8. it is easy to forget what R6 holds without a name.
Thanks, this is very interesting, but (unless I totally missed the point) it seems to be advanced optimisation stuff, and I am still very, very far from that. Remember I am making a prototype for myself, so the most important thing is that it works 100% in practice, even if it is not optimised like a final commercial product. I am foremost interested in programming oscillators, envelopes, FX, etc, not in dealing with computer- or electronics-related low-level stuff (i.e. drivers and driver APIs); otherwise this seems no simpler to me than using an FPGA system (especially because it's difficult to find good documentation or tutorials about it, unlike FPGAs, for which I found some immediately just by browsing the Internet).
At the beginning, I will rely on open source audio APIs, drivers and general hardware management software, whether in C or C++, and just have the audio callback function call my own program.
But later, if I really use a Cortex-M (which is not certain at all, the Cortex-A series seems great too), yes, I will remember all this.

felix le chat
Veteran Wiggler
Posts: 683
Joined: Tue Aug 10, 2010 5:38 am

Re: digital modular synth - hardware questions

Post by felix le chat » Tue Jun 30, 2020 11:33 am

emmaker wrote:
Mon Jun 29, 2020 6:01 pm
I guess what I was really trying to say is: make an abstraction layer for your audio input/output if you use portaudio or rtaudio on the host, and don't call them directly. When you pick your target (Daisy, Teensy, ...) write the abstraction layer for your target.
All right, actually that's what I intended to do after reading your second post, sorry for the confusion (I just wanted to be sure)
If there are a lot of common parameters between the host and target audio interfaces, it's better to use macros to define the audio abstraction layer than to hard-code it. If you hard-code your abstraction layer, there are going to be subroutine calls to it and then to the audio code it uses. With macros the code is embedded inline, which is faster but harder to debug if you need to do that. But then it's hard to say if you'll really gain anything speed-wise. It really depends on the processor, compiler and libraries.
For my project it should be easy to check, thanks for this tip

EATyourGUITAR
has no life
Posts: 4812
Joined: Tue Aug 31, 2010 12:24 am
Location: Providence, RI, USA

Re: digital modular synth - hardware questions

Post by EATyourGUITAR » Tue Jun 30, 2020 11:38 am

Exactly, this is all advanced stuff I would never throw at a beginner. Even though I understand it, I would not want to do it that way. Starting to program ARM in C, using some working example code as a starting point, would be the best way. Forget FPGAs. Writing your own API, even a very basic one, may not be the best way for a beginner. It is the job of the Daisy developers to provide users with libs and an API. You will inevitably need to interface with peripherals, so there will be driver calls somewhere. How easy or difficult they are to use can vary. Compare Arduino SPI to ST SPI for a Cortex-M3. If you could write an API that looks like the Arduino API, then you could avoid using the ST SPI library directly, but you would still need to use the ST calls when building the API. Crawl before you can walk. Start with Arduino, then abandon Arduino, then do it again with the ST SPI driver. Then stop. If you want to put in the extra work to make an API that sits on top, you can, but this comes after you already have a working product, so I don't know if that helps.
WWW.EATYOURGUITAR.COM <---- MY DIY STUFF

felix le chat
Veteran Wiggler
Posts: 683
Joined: Tue Aug 10, 2010 5:38 am

Re: digital modular synth - hardware questions

Post by felix le chat » Tue Jun 30, 2020 12:07 pm

EATyourGUITAR wrote:
Mon Jun 29, 2020 8:43 pm
for a beginner, don't you think it would be easier to start over than to write a library that fits an existing API that is in fact cross-platform? all while at the same time writing the application.
My idea is to write audio and control software modules (represented by C or C++ functions) that can be called (or, say, "driven") by an open-source, or at least ready-to-use, audio and MIDI API that converts the maths from my modules into realtime sound.
What Jay / emmaker suggested was to not call a specific API directly, but instead add an intermediary software layer (it can be a function or preprocessor instructions) that makes it easy to switch between different APIs; the goal is that early tests can be done on a PC using a PC audio API like portaudio, instead of reprogramming the hardware so many times that it stops working.
think beginner level. at that level they don't know what microcontroller to select.
It is highly likely to be either the Daisy system or the Bela system (for practical reasons)
they don't know what book to read.
True, but the worst part was that I had lots of trouble finding any useful information about turning C/C++ maths into realtime audio on any platform. The first valuable things I found were the portaudio and rtaudio code examples and documentation (and the same for the Bela system later).
they don't always know the basics. something like the daisy would be perfect if and when it actually has a huge community with lots of code examples. 480MHz is pretty serious. when you figure out that you can use DMA with external RAM then you really get an advantage in clock cycles. you could argue that this effectively increases RAM bandwidth and reduces RAM latency. some microcontrollers can DMA the ADC and DAC also. if this is the case then you probably don't want to write your own hardware abstraction layer. the off-the-shelf libraries are probably way more efficient when compiled. DMA can be vendor-specific.
I have just discovered the Daisy board, so I need to get more into it.
But I know the Bela system better, and it already does everything you say plus everything else required (even GUIs now). It is a BeagleBone Black with a firm-realtime Xenomai extension that allows audio programs to run at higher priority than the Linux kernel, plus it uses a PRU for I/O management, so the overall latency and jitter are very low and all I/O is always synchronized to the same clock. All the software that deals with low-level hardware comes as optimized open-source libraries. More details here:
https://bela.io/about.html
http://eecs.qmul.ac.uk/~andrewm/mcpherson_aes2015.pdf
https://github.com/BelaPlatform/Bela/wiki
https://forum.bela.io/d/903-details-for ... for-bela/2
https://forum.bela.io/d/20-pru-vs-mcasp
the only reason to write your own hardware abstraction layer would be if you want 100% compatibility when porting code between different architectures. if you had a huge pile of existing code that needed to be cross-compiled, for example. rewriting the lib avoids emulating the architecture in detail using less efficient methods. I agree that if this is all done in macros then great. if this is not done in macros then maybe not great. however some things must be done with interrupts, so it becomes a subroutine even if it is inline inside an ISR. that would be an example of no difference in performance.
Actually my project is very simple, and I think it would be simple to use exactly the same module library + the same module patcher (the function that deals with module connections and rendering order) for both a computer (ASIO, CoreAudio, etc) and a dedicated board (Daisy, Bela, etc)

[edited]

felix le chat
Veteran Wiggler
Posts: 683
Joined: Tue Aug 10, 2010 5:38 am

Re: digital modular synth - hardware questions

Post by felix le chat » Tue Jun 30, 2020 12:19 pm

EATyourGUITAR wrote:
Tue Jun 30, 2020 11:38 am
Exactly, this is all advanced stuff I would never throw at a beginner. Even though I understand it, I would not want to do it that way. Starting to program ARM in C, using some working example code as a starting point, would be the best way. Forget FPGAs. Writing your own API, even a very basic one, may not be the best way for a beginner. It is the job of the Daisy developers to provide users with libs and an API. You will inevitably need to interface with peripherals, so there will be driver calls somewhere. How easy or difficult they are to use can vary. Compare Arduino SPI to ST SPI for a Cortex-M3. If you could write an API that looks like the Arduino API, then you could avoid using the ST SPI library directly, but you would still need to use the ST calls when building the API. Crawl before you can walk. Start with Arduino, then abandon Arduino, then do it again with the ST SPI driver. Then stop. If you want to put in the extra work to make an API that sits on top, you can, but this comes after you already have a working product, so I don't know if that helps.
(yes, see my message above)

But why start with Arduino when I know it will never be used as the final hardware, and I find Daisy and Bela simpler to use and better suited to my project?

tele_player
Common Wiggler
Posts: 191
Joined: Fri Feb 15, 2019 10:50 am
Location: Sacramento

Re: digital modular synth - hardware questions

Post by tele_player » Tue Jun 30, 2020 4:43 pm

Don’t rule out Teensy 4.1. Daisy is still mostly promises.
Teensy 4.1 is fast, has a working audio library, is easily connected to MIDI and D/A converters, and is available now.

I’d check on the Teensy forum to see if anybody has gotten Pd patches, compiled with heavy, onto Teensy.
