
~ USB MIDI interface for the NeXTCube - ISPW board

I recently restored a NeXTcube with an Ariel ISPW soundcard, with the aim of putting it in the hands of artists and researchers as part of a living electronic music instrument heritage project. To make the cube talk to keyboards, synths or other audio workstations, I built a MIDI interface for the NeXTcube.


Fig: the NeXTCube with the Ariel ProPort and MIDI input/output interface.

Recently, I was able to restore a NeXTcube and install an early version of MAX – a graphical music programming environment. However, a crucial part of the system was missing: there was no way to do MIDI input/output. MIDI is used to connect controllers, keyboards, synthesizers and other musical instruments to the audio workstation. The NeXTcube itself has a serial port which allows users to connect MIDI devices. Next to the serial port on the mainboard, the NeXTcube I am working with also has an RS-422 serial port on the ISPW ‘soundcard’. This serial port uses RS-422 signalling and a mini DIN-8 connector to provide MIDI input and output. While the MIDI data bytes are transmitted according to spec, the connector and the electrical signals are not compatible with standard MIDI.


Fig: the IRCAM/Ariel ISPW soundcard with the mini DIN-8 RS-422 serial port on the right.

For MIDI I/O we need a device which connects the RS-422 MIDI port to both legacy MIDI devices and to computers via USB MIDI. If a MIDI event arrives from the NeXTcube’s RS-422 port it needs to be passed through to the USB and legacy MIDI ports, and the other way around. The Teensy platform is ideal: it supports both hardware serial and USB MIDI. In a retro-computing project it seems wasteful to use a 600MHz Teensy 4.0 only for message passing – the Teensy has much more computing power than the NeXTcube – but it is cheap, easy to program, available and practical.

The RS-422 serial port uses -6V to 6V logic which needs to be transformed to the 0V to 3.3V logic of the Teensy microcontroller. A small PCB provides this level shifting and is connected to one of the Teensy’s hardware serial ports. The pinout of the RS-422 port was verified with a scope and matched the documentation. The Teensy has a usbMIDI mode and can present itself as a standard MIDI device to a PC. Two opto-isolated legacy MIDI DIN-5 ports were connected to another hardware serial port. The software on the Teensy conducts the three-way MIDI message passing.
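The forwarding logic itself is small. Below is a minimal sketch of the idea – not the actual firmware – assuming the RS-422 transceiver sits on Serial1 and the opto-isolated DIN-5 ports on Serial2, using the Arduino MIDI library next to Teensy’s usbMIDI (SysEx handling omitted):

#include <MIDI.h>

// Hypothetical wiring: Serial1 <-> RS-422 level shifter (NeXTcube/ISPW),
// Serial2 <-> opto-isolated DIN-5 in/out. Both run at the MIDI baud rate.
MIDI_CREATE_INSTANCE(HardwareSerial, Serial1, CUBE);
MIDI_CREATE_INSTANCE(HardwareSerial, Serial2, DIN);

void setup() {
  CUBE.begin(MIDI_CHANNEL_OMNI);
  DIN.begin(MIDI_CHANNEL_OMNI);
  CUBE.turnThruOff(); // forward explicitly instead of relying on soft-thru
  DIN.turnThruOff();
}

void loop() {
  if (CUBE.read()) { // NeXTcube -> DIN-5 and USB
    DIN.send(CUBE.getType(), CUBE.getData1(), CUBE.getData2(), CUBE.getChannel());
    usbMIDI.send(CUBE.getType(), CUBE.getData1(), CUBE.getData2(), CUBE.getChannel(), 0);
  }
  if (DIN.read()) { // DIN-5 -> NeXTcube and USB
    CUBE.send(DIN.getType(), DIN.getData1(), DIN.getData2(), DIN.getChannel());
    usbMIDI.send(DIN.getType(), DIN.getData1(), DIN.getData2(), DIN.getChannel(), 0);
  }
  if (usbMIDI.read()) { // USB -> NeXTcube and DIN-5
    // Teensy's usbMIDI type values match the MIDI library's status bytes
    midi::MidiType t = (midi::MidiType) usbMIDI.getType();
    CUBE.send(t, usbMIDI.getData1(), usbMIDI.getData2(), usbMIDI.getChannel());
    DIN.send(t, usbMIDI.getData1(), usbMIDI.getData2(), usbMIDI.getChannel());
  }
}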

Vid: Max/FTS FM synth reacting to USB MIDI input.

The electronics were fixed into a reused metal enclosure. The front panel of the enclosure was replaced by a custom 3D-printed panel. The front contains the RS-422 port, two MIDI DIN-5 ports and a micro USB port for either power alone or MIDI messages and power. Feel free to check out the OpenSCAD design for the front panel with its mini DIN-8 hole.

The working MIDI interface allows the NeXTcube to talk to MIDI keyboards and controllers. It can also be used to measure roundtrip latency. MIDI-to-sound latency determines how long it takes between pressing a MIDI key and hearing sound. MIDI-to-MIDI roundtrip latency determines how long it takes to process, parse and return a MIDI message. For a responsive, reliable system both types of latency should be constant and preferably in the range of 10 ms or below.


Fig: Measured MIDI roundtrip latency on the ISPW board for the NeXTCube.

Measuring the MIDI roundtrip latency shows that the system is able to respond in 3.6 ± 0.4 ms (N=300). A combination of a MAX patch and Teensy firmware was used to measure this automatically. The MIDI-to-audio latency was measured manually a few times and was consistently around 13 ms. These figures show that the system is ideal for low-latency, real-time music making in its default configuration. In MAX the audio buffer sizes could be reduced to achieve an even lower latency, at the risk of running into buffer underruns and audio glitches.
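The automatic measurement itself needs little code. The sketch below is a hypothetical reconstruction – not the firmware that was actually used – assuming a MAX patch on the cube echoes every incoming note back over the RS-422 port wired to Serial1:

// Send a note-on over RS-422 and time the arrival of the echoed reply.
elapsedMicros elapsed; // Teensy utility type: auto-incrementing microsecond counter

void setup() {
  Serial.begin(9600);   // USB serial console for the results
  Serial1.begin(31250); // MIDI baud rate towards the RS-422 transceiver
}

void loop() {
  while (Serial1.available()) Serial1.read(); // drop stale bytes
  uint8_t noteOn[3] = {0x90, 60, 100};        // note-on, middle C, velocity 100
  elapsed = 0;
  Serial1.write(noteOn, 3);
  while (!Serial1.available()) {}             // busy-wait for the first echoed byte
  Serial.println((unsigned long) elapsed);    // roundtrip time in microseconds
  delay(250);                                 // repeat, e.g. 300 times, for a mean and deviation
}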

There is a small discussion of the USB MIDI interface for the NeXTcube on Hackaday.


~ Electronic Music and the NeXTcube - Running MAX on the IRCAM Musical Workstation

The NeXTcube is an influential machine in computing history. With an additional soundcard, it was also one of the first off-the-shelf devices for high-quality, real-time music applications. I have restored a NeXTcube to run an early version of MAX, an environment for interactive music applications.

The NeXTcube context and the IRCAM Musical Workstation

In 1990 NeXT started selling the NeXTcube, a high-end workstation. It introduced or brought together many concepts (Objective-C, the Mach kernel, PostScript, an app store) which are still in use today. The NeXTcube’s influence is especially felt in the Apple ecosystem, with Mac OS X, iPhones and iPads being direct descendants of NeXT’s line of computers.

Due to its high price, the NeXTcube was not a commercial success. It mainly ended up at companies or in the hands of researchers. Two of those researchers, Tim Berners-Lee and Robert Cailliau, created the first HTTP server and web browser at CERN on a NeXTcube. Coincidentally, the HTTP software was publicly released exactly 30 years ago today. Famously, the cube was also used to develop games like the original Doom and Quake. So yes, the NeXTcube runs Doom.


Fig: the NeXTcube’s design stood out compared to the contemporary beige box PCs.

Less well known is the fact that the NeXTcube is also one of the first computing devices capable enough for real-time, high-quality interactive music applications. In the mid-1980s this was still a dream at IRCAM, a French research institute which aims to ‘contribute to the renewal of musical expression through science and technology’. The bespoke hardware and software systems for music applications from the mid-80s were further developed and commercialised in the early 90s. Together these developments resulted in a commercially available version of the “IRCAM Musical Workstation (IMW)”, an early, if not the first, off-the-shelf computer for interactive music applications.

The IRCAM Musical Workstation (IMW), sometimes called the IRCAM Signal Processing Workstation (ISPW), consisted of several hardware and software modules working together to enable interactive music applications. An important component was a ‘soundcard’ which had two beefy 40MHz Intel i860 CPUs for DSP. When installed in the NeXTcube, the soundcard had more computing power than the rest of the computer – similar to modern computers, where some graphics cards have more raw computing power than the main CPU. The soundcard was developed at IRCAM and commercialized by Ariel Inc. under the name “Ariel ProPort”.


Fig: the IRCAM Ariel DSP coprocessor soundcard.

A few software environments were developed at IRCAM which made use of the new hardware. One was Animal; another was the much more influential MAX. MAX provides a graphical programming environment specifically for music applications. Descendants of MAX are still used today: see Ableton Max for Live and Pure Data. I consider the introduction of MAX a pivotal point in electronic music history. Up until the introduction of MAX, creating a new electronic music instrument meant bespoke hardware development; with MAX, it could be done purely in software. This made electronic sound and instrument design not only faster but also accessible to a much wider audience of composers, artists and tinkerers.

The NeXTcube at IPEM

IPEM was an early electronic music production studio embedded at Ghent University, Belgium. Now it is active as an internationally acclaimed research center for interdisciplinary music research. In the early 90s IPEM acquired a NeXTcube Turbo with an internal diskette drive, a SCSI hard disk, a NeXTdimension color graphics card and an Ariel ProPort DSP/ISPW module. The cube was preserved well and came with much of the original software, books and manuals. I have been trying to get this machine working and configure it as an “IRCAM Musical Workstation”.


Fig: IPEM’s NeXTcube with the IRCAM Ariel ProPort.

There were a few practical issues: the mouse was broken, the hard drive unreliable, and the main system fan loud and full of dust. The mouse had a broken cable which was fixed, the hard drive was replaced by a SCSI2SD setup and the fan was replaced with a new one. On the software side of things, the Internet Archive hosts NeXTStep 3.3 which, after many attempts, was installed on the cube. Unfortunately there seemed to be a compatibility issue: the Ariel ProPort kernel module did not work. I started over and installed NeXTStep 3.1, with the same result. Finally, I installed NeXTStep 3.0, which was compatible with the kernel module and MAX/FTS!

Vid: Max/FTS with a commercial Ariel soundcard running on a NeXTcube Turbo.

The restoration of the IRCAM Signal Processing Workstation fits in a university project on living heritage. The idea is to get key historic electronic music instruments into the hands of researchers and artists, to pull the fading knowledge of these devices back into a living culture of interaction. This idea has already resulted in an album: DEEWEE Sessions vol. 01. Currently the collection includes a 1960s reverb plate, an EMS Synthi 100 analog synthesizer from the 70s and a Yamaha DX7 (80s); finally, the NeXTcube/ISPW represents the early 90s and the departure from physical instruments towards immaterial, software-based systems.

Acknowledgements & Further reading

This project was made possible with the support of the Belgian Music Instrument Museum and IPEM, Ghent University. I was fortunate to get assistance from Ivan Schepers and Marc Leman at IPEM, but also from the main developer of MAX: Miller Puckette. I would also like to thank Anthony Agnello, formerly at Ariel Corp, for additional image material and info. I also found the WinWorld and NeXTComputers communities and resources extremely helpful. Thanks a lot!


See also the discussion on this article at Hacker News
Lindemann, E., Dechelle, F., Smith, B., & Starkier, M. (1991). The Architecture of the IRCAM Musical Workstation. Computer Music Journal, 15(3), 41–49.
Puckette, M. (1991). FTS: A Real-Time Monitor for Multiprocessor Music Synthesis. Computer Music Journal, 15(3), 58–67.
Puckette, M. (1988). The Patcher. Proceedings of the ICMC. San Francisco: International Computer Music Association, 420–429.
Puckette, M. (1991). Combining Event and Signal Processing in the MAX Graphical Programming Environment. Computer Music Journal, 15(3), 68–77.




~ An Arduino Trigger Box

Vid: the trigger box set in recording mode via a button or a MIDI key press.

A while back I built a trigger box. Such a device can be used for various synchronisation tasks: it can synchronise cameras, capture devices and sensors. All compatible devices have a 5V TTL input, often on a BNC connector. For a camera, the TTL input could control the shutter time. For a sensor, a TTL clock could determine the sample time or simply be registered alongside another data stream. The trigger box either passes through or blocks an incoming TTL clock. It also outputs a recording level.

There are two ways to use the trigger box. The first is by operating a manual switch to start (and later stop) a recording. When recording, the recording level output is set to 5V and the clock at the CLOCK IN port is passed through to the CLOCK OUT port. The second way to set the recording state is by MIDI over USB: while a MIDI key is pressed, the recording state is high; when the key is released, the state is low. The MIDI key input makes the box compatible with, and controllable from, any DAW. Both ways are shown in the video.
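The core of the firmware is the recording state and the clock gating. Below is a minimal sketch of that logic with hypothetical pin numbers – the actual code lives in the repository linked below:

const int SWITCH_PIN = 2; // manual record switch (hypothetical pin numbers)
const int LEVEL_PIN  = 3; // recording level out, shifted to 5V by the Arduino
const int CLOCK_IN   = 4; // incoming TTL clock
const int CLOCK_OUT  = 5; // gated TTL clock

bool midiKeyDown = false;

// A note-on with velocity 0 may also need to be treated as a note-off,
// depending on the sender.
void onNoteOn(byte channel, byte note, byte velocity)  { midiKeyDown = true;  }
void onNoteOff(byte channel, byte note, byte velocity) { midiKeyDown = false; }

void setup() {
  pinMode(SWITCH_PIN, INPUT_PULLUP);
  pinMode(LEVEL_PIN, OUTPUT);
  pinMode(CLOCK_IN, INPUT);
  pinMode(CLOCK_OUT, OUTPUT);
  usbMIDI.setHandleNoteOn(onNoteOn);  // recording state follows a USB MIDI key
  usbMIDI.setHandleNoteOff(onNoteOff);
}

void loop() {
  usbMIDI.read(); // fires the note handlers above
  bool recording = midiKeyDown || digitalRead(SWITCH_PIN) == LOW;
  digitalWrite(LEVEL_PIN, recording ? HIGH : LOW);
  // pass the incoming clock through only while recording; block it otherwise
  digitalWrite(CLOCK_OUT, recording ? digitalRead(CLOCK_IN) : LOW);
}

Polling the clock in loop() is only precise enough for slow clocks; for fast or jitter-sensitive clocks an interrupt-driven or hardware pass-through is the better choice.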

For practical reasons there are two microcontrollers in the device: a Teensy 3.2 and an Arduino. The Arduino is there for its 5V capabilities and is essentially a rather beefy level shifter. The Teensy is there for its USB MIDI compatibility and controls everything.

For aesthetic reasons the trigger box has been built into a 1950s ‘Sieger portable explosive gas detector’. I did not feel too bad about gutting the original electronics since a battery leak had destroyed most of them. Also, the late-WWII-era knobs are still unmatched for durability and tactile satisfaction.

The code running on the microcontrollers and some documentation can be found on the Trigger Box github repository.


~ Gabber - Visualizing constant-Q transform in the browser

Today I released Gabber: a bit of code to transform audio from the time domain to a constant-Q frequency domain in the browser. To be more precise, it does a constant-Q non-stationary Gabor transform. The heavy lifting is done by ‘the gaborator’, a C++ library by Andreas Gustafsson.

I have compiled the Gaborator library to WebAssembly, added some glue code to bridge the JavaScript and WASM worlds, implemented a Web Audio AudioWorkletProcessor to transform audio in the background, and finally visualized the results via WebGL2.

There is a Gabber live demo below. If you press start and grant microphone access, incoming audio is transformed and plotted onto a canvas. Thanks to WebAssembly and WebGL2 this should run relatively smoothly even on less powerful devices. Please do play around with the perspective slider.



While Gabber is currently a proof of concept, with some attention the library could be used as a front end for browser based music information retrieval applications. My main goal with Gabber is to use it in educational settings to explain the properties of sound, and more concretely pitch, via spectrograms and interactive demos. Also I plan to use it in a browser based tool to extract pitch patterns from music.

The name Gabber refers to both Dennis Gabor and an infamous, mainly Dutch, style of music which was popular when I was young. More information can be found in the GitHub repository: Gabber – High resolution spectral transforms for the web.


~ Pitch content on historic field recordings

On Wednesday the 15th of February I presented a collaboration with Olmo Cornelis titled Pitch content on historic field recordings: Analyzing 400 wax cylinder recordings at ULB, Brussels. The session was organized by the Belgian branch of the ICTM.

The presentation is linked below and is interactive: it contains music analysis software being discussed within the presentation itself.





Thanks to ICTM Belgium for the invitation and the platform! The Ghent University BOF-funded project PaPiOM allows me to work on these topics.


~ DiscStitch at Deezer HQ

I have presented DiscStitch at the MIR get together at the Deezer headquarters in Paris.

DiscStitch is a solution to identify, align and mix digitized audio originating from (overlapping) lacquer discs. The main contribution lies in the novel audio-to-audio alignment algorithm, which is robust against speed differences and variabilities.

Below you can find a presentation introducing DiscStitch. You can also try out the browser-based DiscStitch audio-to-audio alignment page.


~ Updates for Olaf - The Overly Lightweight Acoustic Fingerprinting system

Fig: Olaf fingerprinter.

I have updated Olaf – the Overly Lightweight Acoustic Fingerprinting system. Olaf is a piece of technology that uses digital signal processing to identify audio files by analyzing unique, robust, and compact audio characteristics – or “fingerprints”. The fingerprints are stored in a database for efficient comparison and matching. The database index allows for fast and accurate audio recognition, even in the presence of distortions, noise, and other variations.
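To give an idea of what such a fingerprint can look like: many spectral-peak-based systems, Olaf included, combine pairs of spectral peaks into compact hashes. The fragment below is a toy illustration of that idea, not Olaf’s actual hashing scheme:

#include <stdint.h>

// A spectral peak: a time index (analysis frame) and a frequency bin.
struct peak { int32_t frame; int32_t bin; };

// Pack the two frequency bins and the time delta between two nearby peaks
// into one 64-bit key (assuming the bins fit in 16 bits). Matching keys in
// the database then vote for a time offset between query and stored audio.
uint64_t peak_pair_hash(struct peak a, struct peak b) {
  uint64_t dt = (uint64_t)(b.frame - a.frame) & 0xFFFF;
  return ((uint64_t)a.bin << 32) | ((uint64_t)b.bin << 16) | dt;
}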

Olaf is unique because it works on traditional computing devices, embedded microprocessors and in the browser. To this end I tried to use ANSI C. C is a relatively small programming language, but it has very few safeguards and is full of exciting footguns. I enjoy the limitations of C: limitations foster creativity. I also made ample use of the many footguns C has to offer: buffer overflows, memory leaks, … However, with the current update I think most serious bugs have been found. Some of the changes to Olaf include:

Anyhow, what originally started as a rather quick and dirty hack has been improved quite a bit. The takeaway message: in the world of software it does seem possible to polish a turd.


~ mot - MIDI and OSC Tools - Sending UDP messages from the browser

As a way to get to know the Rust programming language I have developed a couple of practical tools for OSC and MIDI debugging. OSC and MIDI are protocols mostly used in applications dealing with music, where latency should be kept in check. Languages with garbage collection (Java, Go) and scripting languages (Ruby, Python, …) are hard to tune for low-latency applications and do not really offer real-time guarantees. Rust, as a modern alternative to C/C++, is a better fit for cross-platform, low-latency CLI applications.

The MIDI and OSC Tools are bundled in a single CLI application. The application includes:

This opens a couple of possibilities which are discussed below.

Sending UDP messages from the browser

One of the ways to send OSC messages from a browser to a local network is by using the MIDI-out capability of browsers and – using mot – translating MIDI to OSC. An example can be seen below.




Fig: sending a UDP message to a network from a browser using the mot MIDI-to-OSC bridge. Click the image for a more readable version.

Measuring UDP message latency

Both MIDI and OSC can be seen as rather general data encapsulation protocols with wide library and cross-platform support. Their value goes beyond mere musical applications, and the same holds for mot. In this example we are using mot to measure UDP message latency between two hosts.

On the first host we send MIDI messages from MIDI device 0 over OSC to another host with e.g. mot midi_to_osc 192.168.1.12:3000 /m 0. At the other host we receive the OSC messages and send them to a virtual device: mot osc_to_midi 192.168.1.12:3000 /m 6666.

At the second host we return messages from the virtual device to the first host: mot midi_to_osc 192.168.1.4:5000 /m 1. Perhaps you first need to do mot midi_to_osc -l to find the index of the virtual device. As a final step the messages can be received at the first host and returned to the original midi device. On the first host: mot osc_to_midi 192.168.1.4:5000 /m 0.

If the original MIDI device is a Teensy running the “roundtrip patch” then, finally, the roundtrip time is accurately measured and shown in the serial console. I am sure the previous text is cromulent, totally not contrived and not confusing. Anyway, to make it more confusing: this is what happens when you use a single host to go MIDI to OSC to MIDI to OSC to MIDI over the loopback networking device:




Fig: MIDI to OSC to MIDI to OSC to MIDI roundtrip latency.

Visualizing sensor data in the browser




Fig: Sensor data as MIDI.

When capturing sensor data on microcontrollers, the data can be encoded into MIDI. This makes almost any sensor practically useful in Ableton Live or similar environments, and compatible with all other MIDI-supporting devices. With mot it becomes trivial to send MIDI-encoded sensor data over OSC, e.g. to a central place to log that data.
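On the microcontroller side the encoding can be as simple as mapping a sensor reading onto a MIDI control change. A minimal Teensy-style sketch, with a hypothetical sensor pin and controller number:

void setup() {
  // usbMIDI needs no explicit setup on a Teensy; A0 is a hypothetical sensor pin
}

void loop() {
  int raw = analogRead(A0);                  // 10-bit reading: 0..1023
  usbMIDI.sendControlChange(1, raw >> 3, 1); // CC #1, scaled to 0..127, on channel 1
  delay(10);                                 // a ~100Hz sensor stream
}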

Another use case is to visualize the incoming data in real-time. A single web page which reads and visualizes incoming MIDI-sensor data becomes much more useful if streams from other devices can be visualized as well with the mot midi_to_osc and mot osc_to_midi commands.

Cross-platform support

With the Rust compiler it is relatively easy to cross-compile for different targets. There is however an important limitation in mot. Windows has no support for virtual MIDI ports which limits the usefulness of mot on that platform.

Check the mot – MIDI and OSC Tools GitHub repository for the software. Perhaps also of interest for MIDI debugging are VMPK, MIDI Monitor and the web MIDI tools.


~ Attempting humor in academic writing

Fig: Advances in Speech and Music Technology book cover.

I have recently published a chapter in an academic book published by Springer. The topic of the book is of interest to me but can be perceived as rather dry: Advances in Speech and Music Technology.

The chapter I co-authored presents two case studies on detecting duplicates in music archives. The first case study deals with segmentation reuse in an archive of early electronic music. The second deals with metadata reuse in an archive of a public broadcaster containing digitized commercial shellac disc recordings with many duplicates.

Duplicate detection being the main topic, I decided to title the article Duplicate Detection for for Digital Audio Archive Management. It is easy to miss, and not much is lost if you do, but there is a duplicate ‘for’ in the title. If you did detect the duplicate you have detected the duplicate in the duplicate detection article. Since I have fathered two kids I see it as a hard-earned right to make dad jokes like that. Even in academic writing.

It was surprisingly difficult to get the title published as-is. At every step of the academic publishing process (review, editorial, typesetting, layout) I was asked about it and had to send an email like the one below. Every email and every explanation made me second-guess my sense of humor, but I do stand by it.

From: Joren
To: Editors ASMT

Dear Editors,

I have updated my submission on easychair in…

I would like to keep the title, however, as it is an attempt at word-play. These things tend to have less impact when explained but the article is about duplicate detection and is titled ‘Duplicate detection for for digital audio archive management’. The reviewer, attentively, detected the duplicate ‘for’ but unfortunately failed to see my attempt at humor. To me, it is a rather harmless witticism.

Regards

Joren

Anyway, I do think that humor can serve as a gateway to direct attention to rather dry, academic material. Also, the message and the form of the message should not be confused. John Oliver, for example, built his whole career on delivering serious, sometimes dry, messages with heaps of humor, which does not make the topics any less serious. I think there are a couple of things to be learned there. Anyway, now that I have your attention, please do read the author version of Duplicate Detection for for Digital Audio Archive Management: Two Case Studies.


~ Updates for TarsosDSP

TarsosDSP is a Java library for audio processing I started working on more than 10 years ago. The aim of TarsosDSP is to provide an easy-to-use interface to practical music processing algorithms. Obviously, I have been using it myself over the years as my go-to library for audio processing in Java. However, a number of gradual changes in the Java ecosystem made TarsosDSP more and more difficult to use.

Since I am apparently not the only one using it, there was a need to give it some attention, and during the last couple of weeks I found the time to do just that. This resulted in a number of updates; some of the changes include:

Examples of TarsosDSP
Fig: The updated TarsosDSP release contains many CLI and GUI example applications.

Notably, the code of TarsosDSP has not changed much, apart from some cosmetic tweaks. This backwards compatibility is one of the strong points of Java. With this update I am quite confident that TarsosDSP will remain usable during the next decade as well.

Please check out the updated TarsosDSP repository on GitHub.


~ The automatic HTTPS capabilities of Caddy

Caddy logo

This blog has been running on Caddy for the last couple of months. Caddy is an HTTP server with support for reverse proxies and automatic HTTPS. The automatic HTTPS feature takes care of requesting, installing and updating SSL certificates, which means you need far fewer configuration settings and less maintenance compared with e.g. lighttpd or Nginx. The underlying certmagic ACME client is responsible for requesting these certificates.

Before, this blog ran on lighttpd, but during the last decade the development of lighttpd has stalled: lighttpd version 2 has been in development for 7 years and the bump from 1.4 to 1.5 has been taking even longer. lighttpd started showing its age with limited or no support for modern features like WebSockets and HTTP/3, and with finicky configuration for e.g. HTTPS with virtual domains.

Caddy with Ruby on Rails

I really like Caddy’s sensible defaults and the limited lines of configuration needed to get things working. Below you can find a reusable HTTPS-enabled configuration for a Ruby on Rails application. This configuration does file caching, compression, HTTP-to-HTTPS redirection and load balancing for two local application servers. It also serves static files directly and only passes non-file requests to the application servers.

(cachestaticfiles) {
    @staticFiles {
        file
    }
    header @staticFiles Cache-Control "public, max-age=604800, must-revalidate"
}

(railsdefaults) {
    #compress responses
    encode zstd gzip

    #redirect from http to https
    @http {
        protocol http
    }

    redir @http https://{host}{uri}

    @notStatic {
        not file
    }

    import cachestaticfiles
    reverse_proxy @notStatic localhost:{args.0} localhost:{args.1}
    file_server
}

example.com {
    root * /var/www/example.com/current/public
    import railsdefaults 10000 10001
}

If you are self-hosting I think Caddy is a great match in all but the most exotic or demanding setups. I am definitely kicking myself for not checking out Caddy sooner: it could have saved me countless hours installing and maintaining HTTPS certificates and configuring lighttpd in general.


~ Crossplatform JNI builds with Zig

JNI is a way to use C or C++ code from Java and allows developers to reuse and integrate C/C++ in Java software. In contrast to Java code, C/C++ code is platform dependent and needs to be compiled for each platform/architecture. It is also generally not a good idea to make users compile a C/C++ library: it is best to provide precompiled libraries. As a developer it is, however, a pain to provide binaries for each platform.

With the dominance of x86 processors receding, the problem of having to compile software for many platforms is becoming more pressing. It is not unthinkable to want to support, for example, Intel and M1 macOS, ARM and x86_64 Linux, and Windows. To support these platforms you would either need access to such a machine with a compiler or need to configure a cross-compiler for each system: both are impractical. Setting up a cross-compiler is typically time consuming and finicky, and virtual machines can be tough to set up. There is, however, an alternative.

Zig is a programming language but, thanks to its support for C/C++, it also ships with an easy-to-use cross-compiler which is of interest here even if you have no intention of writing a single line of Zig code. The built-in cross-compiler makes it easy to target many platforms.


Zig logo

The Zig cross-compiler in practice

Cross compilation of C code is possible by simply replacing the gcc command with zig cc and adding a target argument, e.g. -target x86_64-windows-gnu for targeting Windows. There is more general information on Zig as a cross-compiler here.

Cross-compiling a JNI library is not different from compiling other libraries. To make things concrete we will cross-compile a library from a typical JNI project: JGaborator wraps the C/C++ library gaborator. In this case the C/C++ code does a computationally intensive spectral transformation of time domain data. The commands below create an x86_64 Windows DLL on a macOS machine with Zig installed:


#wget https://aka.ms/download-jdk/microsoft-jdk-17.0.5-windows-x64.zip
#unzip microsoft-jdk-17.0.5-windows-x64.zip
#export JAVA_HOME=`pwd`/jdk-17.0.5+8/
git clone --depth 1 https://github.com/JorenSix/JGaborator
cd JGaborator/gaborator
echo $JAVA_HOME
JNI_INCLUDES=-I"$JAVA_HOME/include"\ -I"$JAVA_HOME/include/win32" 
zig cc  -target x86_64-windows-gnu -c -O3 -ffast-math -fPIC pffft/pffft.c -o pffft/pffft.o
zig cc  -target x86_64-windows-gnu -c -O3 -ffast-math -fPIC -DFFTPACK_DOUBLE_PRECISION pffft/fftpack.c -o pffft/fftpack.o
zig c++ -target x86_64-windows-gnu -I"pffft" -I"gaborator-1.7"  $JNI_INCLUDES -O3\
        -ffast-math -DGABORATOR_USE_PFFFT  -o jgaborator.dll jgaborator.cc pffft/pffft.o pffft/fftpack.o
file jgaborator.dll
# jgaborator.dll: PE32+ executable (console) x86-64, for MS Windows

Note that, when cross-compiling from macOS to target Windows, a Windows JDK is needed: the Windows JDK has different header files, like jni.h. Some commands to download and use this JDK are commented out in the example above. Also note that targeting Linux from macOS seems to work with the standard macOS JDK, probably thanks to shared conventions regarding the compilation of libraries.

To target other platforms, e.g. ARM Linux, only two things need to change: the -target switch should be changed to aarch64-linux-gnu and the name of the output library should, by Linux convention, be changed to libjgaborator.so. During the build step of JGaborator a list of target platforms is iterated and a total of 9 builds are packaged into a single JAR file. There is also a bit of supporting code to load the correct version of the library.
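To make those two changes concrete, the Windows commands above would become something like the following for ARM Linux – an untested sketch; note that the pffft objects are rebuilt with the same -target switch:

zig cc  -target aarch64-linux-gnu -c -O3 -ffast-math -fPIC pffft/pffft.c -o pffft/pffft.o
zig cc  -target aarch64-linux-gnu -c -O3 -ffast-math -fPIC -DFFTPACK_DOUBLE_PRECISION pffft/fftpack.c -o pffft/fftpack.o
zig c++ -target aarch64-linux-gnu -I"pffft" -I"gaborator-1.7" $JNI_INCLUDES -O3\
        -ffast-math -DGABORATOR_USE_PFFFT -o libjgaborator.so jgaborator.cc pffft/pffft.o pffft/fftpack.o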

Using a GitHub action or similar CI tools this cross-compilation with Zig can be automated to run on a software release. For GitHub the Setup Zig action is practical.

Loading the correct library

In a first attempt I tried to detect the operating system and architecture of the environment and then load the corresponding library, but I eventually decided against this approach, mainly because you then need to keep an exhaustive list of supported platforms, which is difficult, error prone and decidedly not future-proof.

In my second attempt I simply try to load each precompiled library – limited to the sensible ones, e.g. only DLLs on Windows – until one loads successfully. The rationale is that the system itself knows best which library works, and failing to load a library is computationally cheap. There is some code to iterate over all precompiled libraries in a JAR file, so supporting an additional platform amounts to adding a precompiled library to the JAR folder: there is no need to be explicit in the Java code about architectures or OSes.

Trying multiple libraries has an additional advantage: it allows shipping multiple versions targeting the same architecture, e.g. one with additional acceleration libraries enabled and one without. By sorting the libraries alphabetically, the first should be the one with acceleration and the fallback the one without. In the case of JGaborator for macOS aarch64 there is one version compiled with -framework Accelerate and one compiled by the Zig cross-compiler without it.

Take-home messages

If you find this valuable consider sponsoring the work on Zig


~ Emotopa - Patterns in Pitch Organization



Fig: Screenshot of Emotopa: a browser based tool to extract pitch organization from audio.

A couple of days ago I participated in the Music Hack Day – India. The event was organized on the 10th and 11th of December in Bengaluru, India. During the event a representative of Smule suggested a task: evaluate the performance of karaoke singers in terms of intonation. The idea was to employ pitch-histogram-like features to estimate the pitch use of singers.

I offered to build a browser-based application to extract pitch histograms from audio. At the end of the hack day I presented the first release of Emotopa with some limited functionality:

  1. The application is able to decode and use audio from any format or container by using an audio-focused WebAssembly build of ffmpeg.
  2. Next, a pitch detector runs on the audio and returns a list of pitch estimates.
  3. Finally a histogram (technically a kernel density estimate) is constructed using the pitch estimates. A toy version of this step is sketched below.
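The kernel density estimate of step 3 is easy to sketch. Below is a toy version in C++ – Emotopa itself runs in the browser, so this is only illustrative – assuming pitch estimates expressed in cents:

#include <algorithm>
#include <cmath>
#include <vector>

// Toy pitch-class kernel density estimate: each pitch estimate (in cents) is
// folded to one octave and smeared out with a Gaussian kernel.
std::vector<double> pitchClassKDE(const std::vector<double>& pitchesInCents,
                                  double sigma = 10.0 /* kernel width in cents */) {
  const int bins = 1200; // one bin per cent
  std::vector<double> kde(bins, 0.0);
  for (double pitch : pitchesInCents) {
    double pitchClass = std::fmod(pitch, 1200.0); // fold to a pitch class
    for (int b = 0; b < bins; b++) {
      double d = std::fabs(b - pitchClass);
      d = std::min(d, 1200.0 - d); // circular distance within the octave
      kde[b] += std::exp(-0.5 * (d / sigma) * (d / sigma));
    }
  }
  return kde;
}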

The user can export the pitch histogram, the pitch class histogram and the pitch annotations. These features successfully show the intonation quality of singers but the applications are much broader. Some potential applications have been described in (amongst others) the Tarsos article.

The Emotopa name alludes to the Apotome browser-based app where, starting from a scale, you can make music. With Emotopa you do the reverse. Also very much of interest are Leimma and the rationale behind both Apotome and Leimma.

The source can be verified on the Emotopa GitHub repository.

This contribution was made possible thanks to travel funds by the FWO travel grant K1D2222N and the Ghent University BOF funded project PaPiOM.


~ DiscStitch & BAF - Contributions to ISMIR 2022

This year’s ISMIR conference is organized from 4 to 9 December 2022 in Bengaluru, India. ISMIR is the main music technology and music information retrieval (MIR) conference. It is a relief to experience a conference in physical form and not through a screen.

I have contributed to the following work which is in the main paper track of ISMIR 2022:

BAF: An Audio Fingerprinting Dataset For Broadcast Monitoring (version of record)
Guillem Cortès, Alex Ciurana, Emilio Molina, Marius Miron, Owen Meyers, Joren Six, Xavier Serra

Abstract: Audio Fingerprinting (AFP) is a well-studied problem in music information retrieval for various use-cases e.g. content-based copy detection, DJ-set monitoring, and music excerpt identification. However, AFP for continuous broadcast monitoring (e.g. for TV & Radio), where music is often in the background, has not received much attention despite its importance to the music industry. In this paper (1) we present BAF, the first public dataset for music monitoring in broadcast. It contains 74 hours of production music from Epidemic Sound and 57 hours of TV audio recordings. Furthermore, BAF provides cross-annotations with exact matching timestamps between Epidemic tracks and TV recordings. Approximately, 80% of the total annotated time is background music. (2) We benchmark BAF with public state-of-the-art AFP systems, together with our proposed baseline PeakFP: a simple, non-scalable AFP algorithm based on spectral peak matching. In this benchmark, none of the algorithms obtain a F1-score above 47%, pointing out that further research is needed to reach the AFP performance levels in other studied use cases. The dataset, baseline, and benchmark framework are open and available for research.

I have also presented a first version of DiscStitch, an audio-to-audio alignment algorithm. This contribution is in the ISMIR 2022 late-breaking demo session:

DiscStitch: towards audio-to-audio alignment with robustness to playback speed variabilities (version of record)
Joren Six

Abstract: Before magnetic tape recording was common, acetate discs were the main audio storage medium for radio broadcasters. Acetate discs only had a capacity to record about ten minutes. Longer material was recorded on overlapping discs using (at least) two recorders. Unfortunately, the recorders used were not reliable in terms of recording speed, resulting in audio of variable speed. To make digitized audio originating from acetate discs fit for reuse, (1) overlapping parts need to be identified, (2) a precise alignment needs to be found and (3) a mixing point suggested. All three steps are challenging due to the audio speed variabilities. This paper introduces the ideas behind DiscStitch: which aims to reassemble audio from overlapping parts, even if variable speed is present. The main contribution is a fast and precise audio alignment strategy based on spectral peaks. The method is evaluated on a synthetic data set.

Next to my own contributions, the ISMIR conference program is the best overview of the state of the art in MIR.

This contribution was made possible thanks to travel funds by the FWO travel grant K1D2222N and the Ghent University BOF funded project PaPiOM.


~ Panako: a scalable audio search system

Fig: DALL.E 2 imagining a fight between papers and software.

Recently I have published a paper titled ‘Panako: a scalable audio search system’ in the Journal of Open Source Software (JOSS). The journal is a ‘hack’ to circumvent the focus on citable papers in the academic world: getting recognition for publishing software as a researcher is not straightforward.

For example, Ghent University’s biblio and the FWO’s academic profile do not allow entering software as research output. The focus is still solely on papers, even when custom-developed research software has become a fundamental aspect of many research areas. My role is somewhere between that of a ‘pure’ researcher and that of a research software engineer, which makes this focus on papers quite relevant to me.

The paper aims to make the recent developments on Panako ‘count’. Thanks to the JOSS review process the Panako software improved considerably: CI, unit tests, documentation, containerization, … The paper was a good reason to improve all these areas, which are all too easy to neglect. The paper itself is a short, rather general overview of Panako:

Panako solves the problem of finding short audio fragments in large digital audio archives. The content based audio search algorithm implemented in Panako is able to identify a short audio query in a large database of thousands of hours of audio using an acoustic fingerprinting technique.


~ Low impact runner: a music based bio-feedback system

Fig: schema of the low impact runner system.

I have been lucky to be involved in an interdisciplinary research project around the Low Impact Runner: a music-based bio-feedback system to reduce tibial shock in over-ground running. In the beginning of October 2022 the PhD defence of Rud Derie takes place, so it is a good moment to look back on this collaboration between several branches of Ghent University: IPEM, movement and sports science, and IDLab.

The idea behind the project was to first select runners with a high foot-fall impact. An intervention would then slightly nudge these runners towards a running style with lower impact. A lower repetitive impact is expected to reduce the chance of injuries common among runners. A system was invented in which musical bio-feedback was given based on the measured impact. The schema to the right shows the concept.

I was involved in the development of the first hardware prototypes which measured acceleration on the legs of the runner, and in the development of software to receive and handle these measurements on a tablet strapped to a backpack worn by the runner. This software also logged measurements, had real-time visualisation capabilities and allowed remote control and monitoring over the network. Finally, measurements were sent to a Max/MSP sonification engine. These prototypes of software and hardware were replaced during a valorization project, but some parts of the software ended up in the final Android application.

Video: the left screen shows the indoor positioning system via UWB (ultra-wideband); the right screen shows the music feedback system and the real-time monitoring of the runner’s impact. Video by Pieter Van den Berghe.

Over time the first wired sensors were replaced with wireless Bluetooth versions. This made the sensors easier to use and also made it possible to visualize sensor values in the browser thanks to the Web Bluetooth API. I have experimented with this and made two demos: a Low Impact Runner visualizer and one with the conceptual schema.

Vid: Visualizing the Bluetooth Low Impact Runner sensor in the browser.

The following three studies show part of the trajectory of the project. The first paper is a validation of the measurement system; the second, a proof-of-concept study which greenlit the larger-scale intervention study reported in the third.

  1. Van den Berghe, P., Six, J., Gerlo, J., Leman, M., & De Clercq, D. (2019). Validity and reliability of peak tibial accelerations as real-time measure of impact loading during over-ground rearfoot running at different speeds. Journal of Biomechanics, 86, 238-242.
  2. Van den Berghe, P., Lorenzoni, V., Derie, R., Six, J., Gerlo, J., Leman, M., & De Clercq, D. (2021). Music-based biofeedback to reduce tibial shock in over-ground running: A proof-of-concept study. Scientific reports, 11(1), 1-12.
  3. Van den Berghe, P., Derie, R., Bauwens, P., Gerlo, J., Segers, V., Leman, M., & De Clercq, D. (2022). Reducing the peak tibial acceleration of running by music‐based biofeedback: A quasi‐randomized controlled trial. Scandinavian Journal of Medicine & Science in Sports

There are quite a number of other papers, but I was less involved in those. The project also resulted in two PhDs:

I am also recognized as co-inventor on the low impact runner system patent and there are concrete plans for a commercial spin-off. To be continued…



~ SMPTE decoding in the browser

I have created a web application called LTC.wasm which decodes SMPTE timecodes from an LTC-encoded audio signal.

To synchronize multiple music and video recordings a shared SMPTE timecode signal is often used. For practical purposes the timecode signal is encoded in an audio stream. The timecode can then be recorded in sync with microphone inputs or added to a video recording. The timecode is encoded in audio with LTC: linear timecode. A special decoder is needed to extract the SMPTE timecode from the audio, and this is exactly what the LTC.wasm application does.
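LTC uses biphase mark coding: there is a transition at every bit-cell boundary and an extra mid-cell transition for a one bit. A decoder can therefore classify the distances between transitions, as the toy fragment below does. It is only illustrative, not how LTC.wasm is actually implemented:

#include <cstddef>
#include <vector>

// Classify distances between zero crossings: a full-cell distance decodes to
// a 0, two consecutive half-cell distances form a 1. The cell period is in
// samples and assumed known; a real decoder tracks it adaptively.
std::vector<int> extractLtcBits(const std::vector<float>& audio, double cellPeriod) {
  std::vector<int> bits;
  std::size_t lastCrossing = 0;
  bool pendingHalf = false;
  for (std::size_t i = 1; i < audio.size(); i++) {
    if ((audio[i - 1] < 0.0f) != (audio[i] < 0.0f)) { // a zero crossing
      double distance = double(i - lastCrossing);
      lastCrossing = i;
      if (distance > 0.75 * cellPeriod) { bits.push_back(0); pendingHalf = false; }
      else if (pendingHalf)             { bits.push_back(1); pendingHalf = false; }
      else                                pendingHalf = true;
    }
  }
  // The 80-bit SMPTE frames are then located by scanning for the LTC sync word.
  return bits;
}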


Using the web-based SMPTE decoder

Try out the SMPTE decoder with your own SMPTE files.

The advantage of the web-based version over the command-line ltc-tools is that it does not need to be installed separately and that ffmpeg decodes the audio. This means that almost any multimedia format is supported automatically, while the command-line version only supports a limited number of audio formats.

For further information check out the LTC.wasm GitHub repository.


~ SyncSink.wasm - Synchronize media files by audio-to-audio alignment

I have built a tool for audio-to-audio alignment with applications in the synchronization of media files. It works in the browser: you can synchronize your media files here with SyncSink.wasm. SyncSink.wasm does the following:

  1. From an incoming media file, audio is extracted, downmixed to mono and resampled. This is done with ffmpeg.audio.wasm, a WASM version of ffmpeg.
  2. For each audio track, fingerprints are extracted. These fingerprints drastically reduce the search space for alignment.
  3. Each list of fingerprints is aligned with the list of fingerprints from the reference, resulting in a rough alignment (sketched after this list).
  4. Cross-correlation refines this rough alignment, resulting in sample-accurate results.
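The rough alignment of step 3 boils down to a voting scheme: every fingerprint hash a track has in common with the reference votes for a time offset, and the most popular offset wins. A compact, illustrative C++ sketch of that idea – the real implementation runs in the browser, but the principle is the same:

#include <cstdint>
#include <unordered_map>
#include <vector>

struct Fingerprint { uint64_t hash; int32_t time; }; // time in analysis frames

// Return the offset with the most matching-fingerprint votes.
int32_t roughOffset(const std::vector<Fingerprint>& reference,
                    const std::vector<Fingerprint>& other) {
  std::unordered_multimap<uint64_t, int32_t> index;
  for (const Fingerprint& fp : reference) index.emplace(fp.hash, fp.time);

  std::unordered_map<int32_t, int> votes;
  for (const Fingerprint& fp : other) {
    auto range = index.equal_range(fp.hash);
    for (auto it = range.first; it != range.second; ++it)
      votes[it->second - fp.time]++; // each common hash votes for an offset
  }

  int32_t best = 0;
  int bestVotes = -1;
  for (const auto& entry : votes)
    if (entry.second > bestVotes) { best = entry.first; bestVotes = entry.second; }
  return best; // step 4 refines this offset with cross-correlation
}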


Fig: media synchronization with audio-to-audio alignment.

It supports small time-scale adjustments of around 5%: audio alignment can still be found if audio speed differs a bit.

Some potential use cases:

The code can be found in the SyncSink.wasm GitHub repository.


~ Sending audio over a network with ffmpeg

Fig: stable diffusion imagining a networked music performance

This post describes how to send audio over a network using the ffmpeg suite. ffmpeg is the Swiss army knife for working with audio and video formats: a command-line tool that supports almost all audio formats known to man and woman. ffmpeg also supports streaming media over networks.

Here, we want to send audio recorded by a microphone, over a network to a single receiver on the other end. We are not aiming for low latency. Also the audio is going only in a single direction. This can be of interest for, for example, a networked music performance. Note that ffmpeg needs to be installed on your system.

The receiver – Alice

For the receiver we use ffplay, which is part of the ffmpeg tools. The command below instructs the receiver to listen for TCP connections on an arbitrarily chosen port, 12345. The \?listen part is important since it keeps the program waiting for new connections. For streaming media over a network the stateless UDP protocol is often used: when UDP packets go missing they are simply dropped, and if only a few packets are dropped this does not harm the audio quality much. With TCP, missing packets are resent, which can cause delays and stuttering audio. However, TCP is much easier to tunnel and the stuttering can be compensated with a buffer. Using TCP it is also immediately clear whether a connection can be made; with UDP, packets are happily sent straight into the void and you need to resort to wire sniffing to know whether they actually arrive.

ffplay -nodisp -f mpegts tcp://0.0.0.0:12345\?listen

In this example we use MPEG-TS over a plain TCP socket connection. Alternatively RTMP could be used (which also works over TCP). RTP, however, is usually delivered over UDP.

The shorthand address 0.0.0.0 is used to bind the port to all available interfaces. Make sure that you are listening to the correct interface if you change the IP address.

The sender – Björn

Björn, aka Bob, sends the audio. First we need to know which microphone to use, and to that end there is a way to list audio devices. In this example the macOS avfoundation system is used; for other operating systems there are similar provisions.

ffmpeg -f avfoundation -list_devices true -i ""

Once the index of the device is determined, the command below sends incoming audio to the receiver (which should already be listening on the other end). The audio format used here is MP3, which can be safely encapsulated in MPEG-TS.
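ffmpeg -f avfoundation -i ":1" -acodec libmp3lame -ab 196k -f mpegts tcp://192.168.x.x:12345

Here ":1" selects the audio device with index 1, as found with the list command above.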

Note that the IP address 192.168.x.x needs to be changed to the address of the receiver. If both devices are on the same network, the incoming audio from Bob should now arrive at the side of Alice.

The tunnel

If sender and receiver are not on the same network it might be necessary to do Network Address Translation (NAT) and port forwarding. Alternatively, an ssh tunnel can be used to forward local TCP connections to a remote location. In that case the sender sends the incoming audio to a local port:

ffmpeg -f avfoundation -i ":1" -acodec libmp3lame -ab 196k -f mpegts tcp://127.0.0.1:12345

The connection to the receiver can be made using a local port forwarding tunnel. With ssh the TCP traffic on port 12345 is forwarded to the remote receiver via an intermediary (remote) host using the following command:

ssh -v -L 12345:192.168.x.x:12345 user@host -N 

~ Using Java LMDB on Apple Sillicon or other unsupported platforms

LMDB is a fast key-value store, ideal for storing and querying sorted data with small keys and values. LMDB is a pure C library, but it is often used from other programming languages via some type of bindings. These bindings are ‘bridges’ between languages and are automatically present on supported platforms. On new or unsupported platforms, however, you need to build this bridge yourself.

This blog post is about getting java-lmdb working on such an unsupported platform: arm64. The arm64 platform has become much more popular since the introduction of Apple silicon (M1). On Apple M1 the default architecture of Docker images is also aarch64.

The java-lmdb project uses JNR-FFI in the background. This is only one of the many ways to bridge Java and other programming languages. The new version of JNR-FFI supports arm64 but, currently, only the SNAPSHOT version of java-lmdb uses this version. So the dependencies need to be changed, e.g. (when using Gradle):

repositories {
    mavenCentral()
    maven { url 'https://oss.sonatype.org/content/repositories/snapshots' }
}

dependencies {
    implementation group: 'org.lmdbjava', name: 'lmdbjava', version: '0.8.3-SNAPSHOT'
}

Next you need to build the LMDB library for your platform and copy it to a location where Java looks for it. This only works when compilers are already available on your system; on macOS you might need to install the Xcode command line tools:

#xcode-select --install
git clone --depth 1 https://git.openldap.org/openldap/openldap.git
cd openldap/libraries/liblmdb
make -e SOEXT=.dylib
cp  liblmdb.dylib ~/Library/Java/Extensions

On Debian aarch64 the procedure is similar, but a different file extension is used (.so):

#apt install build-essential
git clone --depth 1 https://git.openldap.org/openldap/openldap.git
cd openldap/libraries/liblmdb
make
mv liblmdb.so /lib

Finally, to use the library from a JAR file, it might be needed to allow lmdbjava to access some parts of the JRE. Note that the --add-opens JVM options must come before -jar, otherwise they are passed to the application instead of the JVM:

java --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED -jar your_jar.jar

A very similar setup was needed for the Docker version of Panako.