AlgoCompSynth One Design Notes

algocompsynth, synthesizers, goals, physical modeling, digital signal processing, machine learning, Linux audio plumbing, web servers, live coding, package managers, books

What I’m building - goals and some design decisions

M. Edward (Ed) Borasky https://twitter.com/znmeb
03-19-2021

Project goals

Let me first emphasize that this is a hobby project. I have no plan for this to become a commercial product. The challenge I’ve set for myself here is enough to keep me engaged and productive without adding the chores of accounting, sales, marketing, competitors, intellectual property attorneys and all the other “boring business shit that doesn’t do itself.”

The overarching goal is to build the best digital music synthesizer I can within a hardware budget of $500US. I’m slightly over that because I’ve used a larger microSD card than I originally planned - 512 gigabytes. I’m sure the space will be used.

A sub-goal of that is to catch up on the technology. My last serious attempt at computer music was in 2004 at Professor David Cope’s WACM algorithmic composition workshop (Marshall 2005). Before that was a microtonal festival in El Paso in 2001, and before that was a Commodore 64 in 1986.

GPUs are real, and they’re spectacular. If there’s something besides an NVIDIA Jetson Xavier NX with the same compute power and a fully-supported Ubuntu Linux desktop for $399US, I haven’t found it. I plan to build a synthesizer that can keep 384 CUDA cores busy making interesting experimental music in real time.

Non-goals

  1. Commercial success - see above.
  2. Musical interfaces: Every musician has their own idea of what interface they want. Keyboard players want keyboards, woodwind players want breath controllers, music producers want music production boxes, sound designers want knobs and visualizations, Eurorack musicians want patch panels and accessible control interfaces, and nobody wants digital audio workstations.

AlgoCompSynth One will support the MIDI Polyphonic Expression (MPE) and Open Sound Control standards, plus live coding. It’s a synthesizer, not a studio!
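To make the OSC commitment concrete, here is a minimal sketch of the OSC 1.0 wire format in pure Python. The address `/synth/freq` and the 440 Hz argument are hypothetical examples, not part of the project's actual namespace:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Pad to a multiple of 4 bytes with at least one NUL, per the OSC 1.0 spec."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message carrying only float32 arguments."""
    type_tags = "," + "f" * len(floats)
    msg = osc_pad(address.encode("ascii")) + osc_pad(type_tags.encode("ascii"))
    for x in floats:
        msg += struct.pack(">f", x)  # OSC floats are big-endian IEEE 754
    return msg

# a hypothetical control message: set some oscillator to 440 Hz
packet = osc_message("/synth/freq", 440.0)
```

In practice the packet would be sent over UDP to the synthesizer's OSC port; the point here is only that the protocol is simple enough to speak from any live coding environment.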

Design: system configuration

A Linux system can be configured in two main ways: as a desktop or as a server. The JetPack SDK for the Jetson Xavier NX ships with a full Linux desktop, so that's available with no effort.

However, the desktop uses RAM, requires an attached keyboard, pointing device and display, and presents a learning curve to musicians coming from Apple or Microsoft platforms. For these reasons, AlgoCompSynth One will be configured as a server.

As I noted in a previous post (Borasky 2021), one of the inspirations for this project is the Critter and Guitari Organelle M. In particular, the Organelle M can act as a web server on a WiFi network (Critter and Guitari 2021).

A web server has many advantages, but the main one is that it can be accessed from any device with a web browser and sufficient screen real estate. For the initial deployment I’ll be using JupyterLab (Jupyter 2021) as the web server. JupyterLab is widely used among data scientists, and it allows access to most of the amazing NVIDIA tutorials and demos for the Jetson platform.
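As a sketch of the headless-server idea, a JupyterLab 3.x configuration file might look like the fragment below. The values are illustrative, not the project's actual settings; the `c.ServerApp` traits come from the underlying jupyter_server package:

```python
# ~/.jupyter/jupyter_lab_config.py -- illustrative values only
c.ServerApp.ip = "0.0.0.0"        # listen on all interfaces, not just localhost
c.ServerApp.port = 8888           # the JupyterLab default port
c.ServerApp.open_browser = False  # headless server: no local browser to open
```

With something like this in place, any browser on the WiFi network can reach the synthesizer at the Jetson's address on port 8888.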

Design: how much of Ubuntu Studio to use?

The Jetson repositories contain nearly all of Ubuntu Studio 18.04. This is tempting: there are DAWs, softsynths, live coding environments and even drum machines. But there are three disadvantages:

  1. Most of the software requires a desktop, and AlgoCompSynth One won’t have one,
  2. Little if any of it can take advantage of the GPU, and
  3. Ubuntu Studio 18.04 is three years old and is nearing end-of-life.

So I’ll be using some low-level command-line tools from Ubuntu itself, but the bulk of the functionality will come from application software I’ve written myself or installed from upstream open source projects.

Design: synthesis technology

The approaches I’m planning to use draw heavily on classical digital signal processing, as elegantly adapted for musical purposes by Professor William A. Sethares (Sethares 2005; Sethares 2007) and Professor Julius O. Smith (Smith 2011). I also plan to incorporate physical modeling (Smith 2010).
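As a taste of the physical-modeling approach, here is a minimal Karplus-Strong plucked-string sketch in NumPy. The frequency, duration and averaging coefficient are illustrative choices, not project code; the real synthesizer would run loops like this on the GPU:

```python
import numpy as np

def karplus_strong(freq=220.0, sample_rate=44100, duration=1.0, seed=0):
    """Karplus-Strong plucked string: a white-noise burst fed through a
    delay line with a two-point averaging lowpass in the feedback loop."""
    rng = np.random.default_rng(seed)
    period = int(sample_rate / freq)        # delay-line length in samples
    buf = rng.uniform(-1.0, 1.0, period)    # initial excitation: white noise
    out = np.empty(int(sample_rate * duration))
    for i in range(len(out)):
        out[i] = buf[i % period]
        # average the current sample with its successor and write it back;
        # this lowpass is what makes the "string" decay naturally
        buf[i % period] = 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

pluck = karplus_strong()
```

The physical insight, covered in depth in Smith (2010), is that the delay line models the traveling wave on the string and the averaging filter models frequency-dependent losses.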

The NVIDIA cuSignal library (cusignal2021a) will provide the core signal processing capabilities. cuSignal is a GPU-optimized API that closely mirrors the Python scipy.signal library (SciPy.org 2021). And cuSignal ships with a number of tutorial IPython notebooks.
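Because cuSignal mirrors the scipy.signal API, a CPU-only sketch translates nearly verbatim to the GPU. The example below lowpass-filters a two-tone test signal with scipy.signal; the specific frequencies and filter length are illustrative, and on the Jetson the same calls would come from the cusignal namespace:

```python
import numpy as np
from scipy import signal  # on the Jetson, cusignal mirrors this API

fs = 48000
t = np.arange(fs) / fs
# two sines: one in the passband (440 Hz), one well above the cutoff (9 kHz)
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 9000 * t)

# 101-tap FIR lowpass with a 2 kHz cutoff
taps = signal.firwin(101, 2000, fs=fs)
y = signal.filtfilt(taps, [1.0], x)  # zero-phase filtering
```

After filtering, only the 440 Hz component should survive, so the RMS level drops from about 1.0 to about 0.707.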

AlgoCompSynth One will also include R and the R audio packages documented in (Sueur 2018). RStudio Server will not be included; however, the Jetson Xavier NX can run the edgyR image (https://hub.docker.com/r/edgyr/edgyr).

Design: deep learning environments

A number of exciting projects in experimental music have been built on top of the two major deep learning tools, TensorFlow and PyTorch. NVIDIA provides Jetson-optimized versions of both and they’ll be on the first release of AlgoCompSynth One.

Design: live coding environments

To get the maximum flexibility possible, AlgoCompSynth One will support several live coding environments. At the moment, the plan is to provide Sonic Pi and Tidal Cycles in the first release. Sonic Pi is widely used and extremely well documented. Tidal Cycles is less well known but produces some music I find quite interesting.

Design: package / environment managers

AlgoCompSynth One will include at least

  1. Miniforge (NumFocus 2021): This provides the Conda package and environment manager. JupyterLab, cuSignal and the R audio utilities will be deployed here.
  2. Virtualenv (PyPA 2021): This will provide isolation for PyTorch and TensorFlow. NVIDIA packages them as Python wheels, so they aren’t compatible with Conda’s philosophy.
References

Borasky, M. Edward (Ed). 2021. “AlgoCompSynth by znmeb: I Can’t Find a Synth I Like, So I’m Building My Own.” https://www.algocompsynth.com/posts/2021-03-13-i-cant-find-a-synth-i-like-so-im-building-my-own/.
Critter & Guitari. 2021. “Joining Existing WiFi Network.” https://www.critterandguitari.com/manual?m=Organelle_M_Manual#53-joining-existing-wifi-network.
Jupyter. 2021. “JupyterLab Documentation.” https://jupyterlab.readthedocs.io/en/stable/.
Marshall, Andrew. 2005. “Workshop in Algorithmic Computer Music 2004.” Computer Music Journal 29 (2): 77–78.
NumFocus. 2021. “Miniforge.” https://github.com/conda-forge/miniforge.
PyPA. 2021. “Virtualenv.” https://virtualenv.pypa.io/en/latest/.
SciPy.org. 2021. “Signal Processing (scipy.signal).” https://docs.scipy.org/doc/scipy/reference/signal.html.
Sethares, W. A. 2005. Tuning, Timbre, Spectrum, Scale. Springer London.
———. 2007. Rhythm and Transforms. Springer London.
Smith, Julius O. 2010. Physical Audio Signal Processing. http://ccrma.stanford.edu/~jos/pasp/.
———. 2011. Spectral Audio Signal Processing. http://ccrma.stanford.edu/~jos/sasp/.
Sueur, J. 2018. Sound Analysis and Synthesis with R. Use R! Springer International Publishing. https://books.google.com/books?id=zfVeDwAAQBAJ.


Reuse

Text and figures are licensed under Creative Commons Attribution CC BY-SA 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

Citation

For attribution, please cite this work as

Borasky (2021, March 19). AlgoCompSynth by znmeb: AlgoCompSynth One Design Notes. Retrieved from https://www.algocompsynth.com/posts/2021-03-18-algocompsynth-one-design-notes/

BibTeX citation

@misc{borasky2021algocompsynth,
  author = {Borasky, M. Edward (Ed)},
  title = {AlgoCompSynth by znmeb: AlgoCompSynth One Design Notes},
  url = {https://www.algocompsynth.com/posts/2021-03-18-algocompsynth-one-design-notes/},
  year = {2021}
}