
~ Guest lecture on `Music Information Retrieval - Opportunities for digital musicology'

This morning I gave a guest lecture introducing the field of music information retrieval (MIR) to musicology students at Ghent University. Next to the more general MIR introduction, two specific topics were fleshed out: duplicate detection and pitch patterns in music around the world, two topics I have worked on before.

The presentation takes the form of an interactive website built with reveal.js. It features a couple of slides which are full-blown applications or have an interactive sound visualization component. Please do try out the slides: check the Music Information Retrieval - Opportunities for digital musicology presentation or try it below.


~ USB MIDI support on ESP32: the ESP32-S3

If you’re considering adding USB MIDI functionality to a music project with an ESP32, it’s crucial to choose the right variant of the chip. The ESP32-S3 is currently the go-to model for USB-related tasks thanks to its native USB capabilities. Unlike other ESP32 models, the S3 can handle USB MIDI directly without the need for additional components, making it an ideal choice for integrating MIDI devices into your setup. For more details on using USB MIDI with the ESP32-S3, check out the ESP32USBMIDI project.

When combined with the ESP32-S3’s built-in WiFi and support for OSC (Open Sound Control) or ESP-NOW, the platform becomes very versatile for music controllers and similar applications. A quick tip: after flashing your device in MIDI mode, the serial port is no longer available, which also makes flashing over serial impossible. If you need to reflash your device, the process is simple: just hold down the Boot button and press Reset.

Another short tip: for troubleshooting and logging, the mot project provides useful tools for debugging OSC or MIDI messages. The support is currently still in flux, but do not make the mistake I made: do not try to do USB MIDI with an ESP32-C3 series chip.


~ Pompernikkel - the Interactive speaking pumpkin 🎃

The last few Halloweens I have been building one-off interactive installations for visiting trick-or-treaters. I did not document last year's build, but the year before I built an interactive doorbell with a jump-scare door projection. This year I was trying to take it easy, but my son came up with the idea of doing something with a talking pumpkin. I mumbled something about feasibility, so he promptly invited all his friends to come over on Halloween to talk to a pumpkin. So I got to work and tried to build something. This blog post documents that build.

A talking pumpkin needs a few functions. It needs to understand kids talking in Dutch, it needs to be able to respond with a somewhat logical response, and ideally it has some memory of previous interactions. It also needs a way to do turn-taking: indicating who is speaking and who is listening. It also needs a face and a name. For the name we quickly settled on Pompernikkel.

For the face I tried a few interactive visualisations: a 3D implementation with three.js and a shader-based approach, but eventually settled on using an SVG with CSS animations to make the face come alive. This approach makes it doable to control animations with JavaScript, since animating a part of the pumpkin means adding or removing a CSS class. See below for the result.

For the other functions I used the following components.

As an extra feature, I implemented a jump scare where a sudden movement would trigger lightning and thunder:

Lessons learned

Most trick-or-treaters were at least intrigued by it, my son’s friends were impressed, and I got to learn a couple of things, see above. Next year, however, I will try to take it easy.


~ FFmpeg with Whisper support on macOS via Homebrew

A couple of months ago, FFmpeg gained support for audio transcription via OpenAI Whisper, implemented through whisper.cpp. This makes it possible to automatically transcribe interviews and podcasts or to generate subtitles for videos. Most packaged versions of the command-line tool ffmpeg do not ship with this option enabled. Here we show how to enable it on macOS with the Homebrew package manager. On other platforms a similar configuration will apply.

On macOS there is a prepared Homebrew tap which allows enabling or disabling the many ffmpeg options. If you already have ffmpeg installed without options, you may need to uninstall the current version and install a version with the chosen options. See below for how to do this:

# check if you already have ffmpeg with whisper enabled
ffmpeg --help filter=whisper

# uninstall current ffmpeg, it will be replaced with a version with whisper
brew uninstall ffmpeg

# add a brew tap which provides options to install ffmpeg from source
brew tap homebrew-ffmpeg/ffmpeg

# this command adds most common optional functionality next to the default options
brew install homebrew-ffmpeg/ffmpeg/ffmpeg \
--with-fdk-aac \
--with-jpeg-xl \
--with-libgsm \
--with-libplacebo \
--with-librist \
--with-librsvg \
--with-libsoxr \
--with-libssh \
--with-libvidstab \
--with-libxml2 \
--with-openal-soft \
--with-openapv \
--with-openh264 \
--with-openjpeg \
--with-openssl \
--with-rav1e \
--with-rtmpdump \
--with-rubberband \
--with-speex \
--with-srt \
--with-webp \
--with-whisper-cpp

Installation will take a while, since many dependencies are required for the many options. Once the build is finished, the whisper filter should be available in FFmpeg. See below for how this should look once correctly installed:

ffmpeg version 8.0 Copyright (c) 2000-2025 the FFmpeg developers
  built with Apple clang version
        ...
Filter whisper
  Transcribe audio using whisper.cpp.
    Inputs:
       #0: default (audio)
    Outputs:
       #0: default (audio)
whisper AVOptions:
   model             <string>     ..F.A...... Path to the whisper.cpp model file
   language          <string>     ..F.A...... Language for transcription ('auto' for auto-detect) (default "auto")
   queue             <duration>   ..F.A...... Audio queue size (default 3)
   use_gpu           <boolean>    ..F.A...... Use GPU for processing (default true)
   gpu_device        <int>        ..F.A...... GPU device to use (from 0 to INT_MAX) (default 0)
   destination       <string>     ..F.A...... Output destination (default "")
   format            <string>     ..F.A...... Output format (text|srt|json) (default "text")
   vad_model         <string>     ..F.A...... Path to the VAD model file
   vad_threshold     <float>      ..F.A...... VAD threshold (from 0 to 1) (default 0.5)
   vad_min_speech_duration <duration>   ..F.A...... Minimum speech duration for VAD (default 0.1)
   vad_min_silence_duration <duration>   ..F.A...... Minimum silence duration for VAD (default 0.5)
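Once the filter is available, transcription can be done directly from the command line. The sketch below is based on the filter options listed above; the input file and model path are assumptions. Download a ggml model from the whisper.cpp project first and point `model` to it.

```shell
# Generate SRT subtitles for an audio recording with the whisper filter.
# interview.wav and ggml-base.bin are placeholders: use your own audio
# file and a model downloaded from the whisper.cpp project.
ffmpeg -i interview.wav \
  -af "whisper=model=ggml-base.bin:language=auto:destination=out.srt:format=srt" \
  -f null -
```

The audio itself is discarded via the null muxer; only the transcription written to out.srt is kept.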

~ MuTechLab - Music Technology Workshop in Luxembourg

Last Friday, I had the pleasure of facilitating a hands-on workshop in Luxembourg as part of the MuTechLab workshop series, organized by Luc Nijs at the University of Luxembourg. Together with Bart Moens from XRHIL and IPEM, we presented a system to control musical parameters with body movement.

MuTechLab is a series of workshops for music teachers who wish to dive into the world of music technology. Funded by the Luxembourgish National Research Fund (FNR, PSP-Classic), the initiative brings together educators eager to explore how technology can enhance music education and creative practice.

What we built and presented

During the workshop, participants got hands-on experience with the EMI-Kit (Embodied Music Interface Kit) – an open-source, low-cost system that allows musicians to control Digital Audio Workstation (DAW) parameters through body movement.

The EMI-Kit consists of:

- A wearable sensor device (M5StickC Plus2) that captures body orientation and gestures
- A receiver unit (M5Stack STAMP S3A) that converts sensor data to MIDI messages

Unlike expensive commercial alternatives, EMI-Kit is fully open source, customizable, and designed specifically for creative music practice and for research on embodied music interaction.

The Experience

Teachers experimented with mapping natural body movements – pitch, yaw, roll, and tap gestures – to various musical parameters in their DAWs. The low-latency wireless system made it possible to move and control sound, opening up new possibilities for expressive musical performance and pedagogy.

Learn More

Interested in exploring embodied music interaction yourself? Check out:

The EMI-Kit project as-is is a demonstrator to inspire educators to embrace these tools and imagine new ways of teaching and creating music. As a platform, the EMI-Kit can, with some additional programming, be a good basis to control musical parameters using various sensors. Have fun checking out the EMI-Kit.


~ MIDI and OSC tools improvements - MIDI processing and mDNS support

I’ve just pushed some updates to mot — a command-line application for working with OSC and MIDI messages. My LLM tells me that these are exciting updates but I am not entirely sure that this is the case. Let me know if this ticks your box and seek professional help.

1. Scriptable MIDI Processor via Lua

I have implemented a MIDI processor that lets you transform, filter, and generate MIDI messages using Lua scripts.

Why is this useful? MIDI processors act as middlemen between your input devices and output destinations. You can transform, filter, or generate new messages based on incoming MIDI messages:

[Diagram: a MIDI device (keyboard, pad, etc.) sends a note (C4) to mot's midi_processor, where a Lua script's process_message() transforms, filters, or generates MIDI, e.g. expanding C4 into a C4 + E4 + G4 chord, before sending it on to a virtual MIDI device such as a DAW or synth.]

The processor reads incoming MIDI from a physical device, processes it through your Lua script, and outputs the modified messages to a virtual MIDI port that your DAW or synth can receive. Some examples:

# Generate chords from single notes
mot midi_processor --script scripts/chord_generator.lua 0 6666

# Transpose notes up by one octave
mot midi_processor --script scripts/example_processor.lua 0 6666

2. Network Discovery via mDNS

OSC receivers now advertise themselves on the network using mDNS/Bonjour with the _osc._udp service type.

This makes mot compatible with the EMI-kit, the Embodied Music Interface Kit developed at IPEM, Ghent University. OSC-enabled devices can automatically discover mot receivers on your network, which eliminates manual configuration, provided the OSC sources implement this discovery.
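To check whether a receiver is actually being advertised, you can browse for the _osc._udp service type with standard mDNS tooling; which tool is available depends on your platform:

```shell
# macOS: browse for OSC services advertised over Bonjour
dns-sd -B _osc._udp

# Linux with Avahi: list and resolve advertised OSC services, then exit
avahi-browse -rt _osc._udp
```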

Get started

Installation via Rust’s cargo:

git clone https://github.com/JorenSix/mot.git
cd mot
cargo install --path .
mot midi_processor -h

Check out the mot repository for full documentation and example Lua scripts!


~ Newline.gent - A yearly hacker conference

This weekend the - more-or-less - yearly conference of Hackerspace Ghent took place: Newline.gent. Hackers, makers, and curious minds gathered to share ideas, tools, experiments and a few beers.

I had a small contribution with a short lecture-performance which covered how to control your computer with a flute. The lecture part covered the technical part of the build, the performance part included playing Flappy Bird with a flute. A third significant part of the talk — arguably the main focus — was devoted to bragging about the global attention the project received.

Other highlights of the Newline conference included talks on Home Assistant, 3D design, BTRFS and workshops that invited everyone to get involved.

Big thanks to the organizers and everyone who joined. I’m already looking forward to the next one!


~ Local TLS certificates with Caddy

This short guide will help you set up local TLS certificates with Caddy as the web server, so that you can immediately develop websites over HTTPS. Having a local HTTPS server in development can help with e.g. debugging CORS issues, accessing resources which require an HTTPS connection, or trying out analytics platforms.

1. Configure your hosts file

If you want to use a domain name, you first need to add a line to /etc/hosts which, in this case, makes example.com point to localhost.

echo "127.0.0.1 example.com" | sudo tee -a /etc/hosts

2. Configure Caddy

In a directory of your choosing, create a Caddyfile with the following content. It tells Caddy to automatically generate certificates on the fly for example.com or any other domain name. You may need to trust the Caddy root certificate on first use:

{
    # Enable the internal CA
    local_certs
}

example.com {
    root * .
    file_server
    tls internal
}

3. Create a test site

In the same directory, create an index.html file with contents similar to the following, or use your own local website:

<html lang="en">
<head>
    <meta charset="UTF-8">
</head>
<body>
    <h1>Hello World!</h1>
</body>
</html>

4. Start the Webserver

Still in the same directory as the Caddyfile and the index.html file, run the following command to start the Caddy web server: caddy run

5. Trust the locally generated certificate

On macOS this means adding the local Caddy root certificate to your keychain. It can be found at /data/caddy/pki/authorities/local/root.crt. In other environments a similar step is needed.
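Alternatively, Caddy can install its locally generated root certificate into the system trust store for you via its trust subcommand:

```shell
# Install Caddy's local root CA into the system trust store;
# this may prompt for elevated privileges.
caddy trust
```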

6. Access the test site

Open your web browser and navigate to https://example.com, or open the test site from the command line: open https://example.com. If you inspect the certificate, it should be issued by the ‘Caddy local authority’.


~ Powering low current electronics projects with power banks

Power banks have become a staple for charging smartphones, tablets, and other devices on the go. They seem ideal to power small microcontroller projects, but they often pose a problem for low-current applications. Most modern power banks include an auto-shutdown feature to conserve energy when they detect a current draw below a specific threshold, often around 50–200 mA. The idea is that the power bank can shut off after charging a smartphone. However, if you rely on power banks to power DIY electronics projects or remote applications with a low current draw, this auto-off feature can be a significant inconvenience.

To address this issue, consider using power banks designed with an “always-on” or “low-current” mode. These power banks are engineered to sustain power delivery even when the current draw is minimal. Look for models that explicitly mention support for low-power devices in their specifications. If replacing a power bank isn’t an option, you can add a small load resistor or a USB dummy load to artificially increase the current draw. It works, but feels wrong and dirty.
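Sizing such a dummy load is a back-of-the-envelope Ohm's law calculation. Assuming the 5 V USB rail and a target draw of 150 mA (both just illustrative figures), you need a resistor of roughly 33 Ω dissipating about 0.75 W, so pick a part rated for 1 W or more:

```shell
# Dummy load sizing via Ohm's law: R = V / I and P = V * I,
# for a 5 V rail and a 150 mA target current draw.
awk 'BEGIN { V = 5; I = 0.15; printf "R = %.1f ohm, P = %.2f W\n", V / I, V * I }'
```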

For a previous electronics project I bought a power bank more or less at random. After a bit of testing, I determined that the minimal current draw was around 150 mA, so I added a resistor to increase the current draw. Only afterwards did I check the manual of the power bank and noticed, luckily, that there was a low-current mode. I removed the resistor and improved the battery life of the project considerably. If you want to power your DIY Arduino or electronics project, first check the manual of the power bank you want to use!

Edit: after further testing it seemed that the low current mode of this specific power bank still shuts down after a couple of hours. Your mileage may vary, and the main point of this post still holds: check the manual of your power bank. Eventually I went with a solution designed for electronics projects.


~ When both tech and nature call: a DIY motion sensor to activate birdsong

There is this thing that starts playing birdsong when it detects movement. It is ideal to connect to nature while nature calls. It is a good idea, executed well, but it got me thinking: this can be made less reliable, more time-consuming, more expensive, and with a shorter battery life. So I started working on a DIY version.


Vid: Playing birdsong when presence is detected with an ESP32 microcontroller.

The general idea is to start playing birdsong when someone is present in a necessary room. In addition to a few electronic components, the project needs birdsong recordings. Freesound is a great resource for all kinds of environmental sounds and has a collection of birdsong which was used for this project.

For the electronics components the project needs a microcontroller and a way to detect presence. I had a laser ranging sensor lying around which measures distance but can be repurposed to detect presence in a small room: most of the time, the distance to an opposite wall is reported. If a smaller distance is measured it is probably due to a person being present. The other components:

As is often the case with builds like this, neither the software nor the hardware is conceptually challenging, but making hardware and software cooperate is. Some pitfalls I encountered: the ESP32-C6 needs USB CDC enabled in the Arduino IDE, the non-standard I2C GPIO pins, getting the many I2S parameters right, dealing with a nasty pop sound once audio started, and a broken LiPo battery. Most of the fixes can be found in the Arduino code.

I use a polling strategy to detect presence. A distance measurement is taken and then the ESP32 goes into a deep sleep until the next measurement. A sensor with the ability to wake up the microcontroller would be a better approach.

Once everything was installed it worked well enough — motion triggered a random birdsong, creating a soothing, natural vibe. It may be less practical than the off-the-shelf version but I did learn quite a lot more than I would have by simply filling in a form and providing payment details…


Previous blog posts

16-01-2025 ~ The time an HTTP request takes to pass through a proxy, a proxy, a proxy, a proxy, ...

03-01-2025 ~ Tasmota for custom ESP32 smart home devices

27-11-2024 ~ GhentCDH at the Faculty Research Day

23-10-2024 ~ Connecting two Bluetooth headsets to your Linux system: audio routing in PipeWire

01-10-2024 ~ Validity and reliability of peak tibial accelerations as real-time measure of impact loading during over-ground rearfoot running at different speeds

24-04-2024 ~ OnTracx product launch - a Ghent University sports-tech spin-off

22-04-2024 ~ Making a flute controlled mouse

28-03-2024 ~ Measuring rain water tank level with an Arduino

13-03-2024 ~ Offloading authentication and user management to Keycloak - A minimal example with Nuxt and Litestar

21-02-2024 ~ 3D modeling with ChatGPT - Solidified ephemerality