Fragment, collaborative additive/granular spectral synth
fsynth
Hello everyone,

For the past ten months I have been making an open-source, cross-platform synthesizer which combines audio and visuals.

It is available here: https://www.fsynth.com



A playlist showing most features is available here: https://www.youtube.com/playlist?list=PLYhyS2OKJmqe_PEimydWZN1KbvCzkjgeI

The audio is produced by first generating/manipulating the visual content on the GPU in real time with a GLSL script (which multiple people can edit simultaneously from the software). The user can then slice any part of the visual content (which is actually the spectrum); the content of the slices (pixels) is sent to a synthesis server/engine which interprets the data and produces sound with either an additive or a granular synthesizer.
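
To make the mapping more concrete: each slice is read as a bank of oscillators, where a row's vertical position gives a frequency and the pixel brightness gives that oscillator's amplitude. Here is a rough, untested sketch of the idea in plain GLSL; the uniform name here is a generic placeholder rather than Fragment's actual pre-defined one (check the documentation for the real names):

```glsl
// Illustrative sketch only (not the exact Fragment API): light up a single
// horizontal row of pixels. When a vertical slice reads this column, the
// row's position maps to a frequency and its brightness to the oscillator
// amplitude, so one bright row is heard as one steady partial.
precision mediump float;

uniform vec2 resolution; // canvas size in pixels (assumed name)

void main() {
    vec2 uv = gl_FragCoord.xy / resolution; // normalized 0..1 coordinates
    // 1.0 on the row sitting at a quarter of the canvas height, 0.0 elsewhere
    float row = step(abs(uv.y - 0.25), 1.0 / resolution.y);
    gl_FragColor = vec4(row, row, 0.0, 1.0);
}
```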

The synthesizer accepts multiple data inputs such as images, the webcam and a canvas (which is just a surface to draw on with the mouse). The main synthesis engine is additive/granular (the granular part is a work in progress), and it is also able to do re-synthesis.

Since the slice content (pixels) can be sent over the network to any program, many weird things can be done with the data, like feeding external instruments. I have only scratched the surface of this and am focusing on the synthesis and accessibility aspects right now.

This synthesizer can also be used to do regular visuals alongside audio synthesis.

I built this because I always wanted something akin to MetaSynth, but with a real-time approach, without buying a Mac, and a bit more modular. It is also highly inspired by Virtual ANS and the ANS synthesizer itself.

The client is made with web technology and works best out of the box in Chrome at the moment, although it also runs well in Firefox if you use the audio server.
lodsb
awesome!!
rikardjh
Awesome! Looks/sounds like a really cool concept, but has a fairly steep learning curve?

Would be great if you could play notes on your computer keyboard and not have to plug in MIDI. Or am I missing something? (I don't have a MIDI keyboard atm and couldn't get any sound out of it.)
fsynth
The learning curve is steep, yeah. The only thing required is to know a bit of GLSL and some pre-defined functions/variables to draw lines and do things with time (GLSL is the programming language used to instruct your GPU to draw things). To me it feels natural of course, but learning GLSL, even though it is a simple language with a simple concept, is steep! The advantage of programming the GPU is that it is very fast at doing whatever manipulation or compositing you want on pixels in real time, and it can produce visuals alongside the audio too, even if I don't use that feature. :)
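
As a tiny example of what "drawing lines and doing things with time" looks like, here is a generic, untested GLSL sketch (again, not Fragment's actual pre-defined names; a `resolution` uniform and a `time` uniform in seconds are assumed) where a single line, i.e. a single partial once sliced, slowly sweeps up and down:

```glsl
// Sketch only: one horizontal line whose vertical position (its pitch, once
// a slice reads it) is modulated over time.
precision mediump float;

uniform vec2 resolution; // assumed name
uniform float time;      // assumed name, seconds

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;
    float target = 0.25 + 0.15 * sin(time * 0.5);       // slow vertical sweep
    float row = step(abs(uv.y - target), 1.5 / resolution.y);
    gl_FragColor = vec4(row, row, 0.0, 1.0);
}
```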

Playing notes on a computer keyboard is planned but relatively low on my todo list. :) Right now it is either MIDI or generating visuals (either way, visuals need to be generated to produce any sound).

I think you didn't add slices; that is why you don't get any sound. To add one, right-click (press and hold on a touch device) on the canvas (the big black area at the top of the toolbar) and click the + icon. The example loaded when you join a new session should produce at least a 440 Hz tone once you have at least one slice.

Here is a sample session with a running sequence + MIDI (optional) and 4 slices. This session requires a good computer however (and maybe the Chrome browser!) as it is heavy; it also shows multitimbrality (4 timbres):

https://www.fsynth.com/app/liuamj3t7pd

This session is based on harmonics. The bare code just draws multiple horizontal lines; the rest is processing on those lines to attenuate the higher frequencies and modulate them with various parameters. As you can see, the code is just some math formulas thrown together, and the parameters/math functions can be tweaked to change the timbres. That code is messy anyway, it is the result of tonight's jam session. ;)
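
For reference, here is a simplified, untested GLSL sketch in the same spirit as that session code, with generic placeholder uniform names and assuming the red/green pixel channels drive the left/right oscillator amplitudes: several horizontal lines act as harmonics, the higher ones are attenuated (roughly 1/n) and each one is slowly modulated over time. How a row maps to an actual frequency depends on Fragment's frequency mapping settings.

```glsl
// Sketch only, not the real session code.
precision mediump float;

uniform vec2 resolution; // assumed name
uniform float time;      // assumed name, seconds

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;
    float amp_l = 0.0;
    float amp_r = 0.0;
    for (int n = 1; n <= 16; n++) {
        float y = float(n) / 32.0;                       // n-th line position
        float hit = step(abs(uv.y - y), 1.0 / resolution.y);
        float a = 1.0 / float(n);                        // attenuate higher partials
        a *= 0.75 + 0.25 * sin(time + float(n));         // slow per-partial modulation
        amp_l += hit * a;
        amp_r += hit * a;
    }
    gl_FragColor = vec4(amp_l, amp_r, 0.0, 1.0);
}
```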

One of the main limitations of Fragment right now is event granularity, as it is bound to the display refresh rate (usually 60 fps, so ~16.6 ms of event granularity). This can unfortunately be an issue for some sounds. I am making another synth based on more or less the same concept but which does not require programming; it will address this issue while staying flexible and easy.

Btw, has anyone tried it with the Safari browser? I still don't know if it works in that browser, so if someone can confirm... thanks! :)
rikardjh
Cool, thanks for the reply. I do know some GLSL, I was just thinking of users in general. It's always easier/more fun if you have something interesting to start with and sort of deconstruct. At least for me.

Will check the video out and give it another shot!
fsynth
Well, you do get sample code when you start, with a basic harmonics setup and filtering.

Anyway, I have updated the post with a link to a video showing live additive synthesis patches at 150 fps. The source code of this patch is available here and, as you can see, it is just a (quite simple) derivation of the sample starting code, except it has 2 voices.

Since then, the synthesizer now supports 32-bit float pixel values, which means a much higher range for oscillator amplitude values, and it also supports separate outputs for visuals and synthesis, which means something like this is now possible.

There is also a new launcher to start the app + external programs and configure them with a GUI, but I have to find time to package it; this should help people getting started. :)
fsynth
Hello,

Fragment is now one year old, and to celebrate, a massive 1.0 anniversary update has been published along with new documentation, tools, a new homepage and videos.

To make it easier for newcomers to try it out with maximum performance, a launcher is available on the homepage. The launcher bundles the audio server and is an easy-to-use gateway to the web application with the audio server.

Here are some new videos:




Here is the client changelog:

  • complete granular synthesis support, asynchronous and synchronous (audio server only)
  • video import, with image conversion for the audio part and playback/looping settings
  • note lifetime: the ability to fully handle the release part of notes, which was a bit cumbersome before
  • OSC IN: GLSL uniforms and videos can be controlled through OSC
  • OSC OUT: note data can be sent as OSC bundles (a SuperCollider additive/spectral synth was made as a demonstration of the possibilities)
  • distributed sound synthesis across multiple machines/cores with the fas_relay tool and audio server instances
  • new easy-to-use launcher tool for the audio server/app, distributed on the homepage with a nice .deb package for Linux distributions
  • controllers were removed due to OSC IN support (I recommend the Open Stage Control software)
  • new documentation made with mkdocs
  • new homepage
  • many optimizations
  • phase information kept in the alpha channel of imported textures
  • direct support for linking the audio server and the application through the session URL (append the ?fas=1 URL argument)
  • more analysis windows (Kaiser, Nuttall, flat top) along with an alpha parameter for the Kaiser, Gauss and Blackman windows
  • the audio server CPU stream load is now displayed in Fragment
  • facelift of the help dialog with code snippets (ADSR etc), about tab and click-to-copy clipboard feature
  • the demo code was updated
  • fix decimals issue with input widget
  • fix inputs save/restore
  • update ShaderToy conversion feature
  • add fmov pre-defined uniform (video input playback time access from the fragment shader)
  • improved notification system/GLSL compilation report

The Fragment Audio Server also gets some new specific features and major fixes:
  • improved sound quality through a configurable linear interpolation factor; this was static before and contributed to muddy sound quality
  • enhanced stability, all known stability issues were fixed
  • no more pop sound on first connection with additive synthesis
  • hot-reload of samples
  • grain pitch can be detected from patterns in the filename (MIDI note or user-defined frequency)
  • improved bandwidth-enhanced oscillators
  • many other minor improvements

All planned features have been implemented. The future holds more synthesis methods, improved collaborative features, improved ease of use and further optimizations. :)