Hi, I'm Joren. Welcome to my website. I'm a research software engineer in the field of Music Informatics and Digital Humanities. Here you can find a record of my research and the projects I have been working on. Learn more »
Last Friday, I had the pleasure of facilitating a hands-on workshop in Luxembourg as part of the MuTechLab workshop series, organized by Luc Nijs at the University of Luxembourg. Together with Bart Moens from XRHIL and IPEM, we presented a system to control musical parameters with body movement.
MuTechLab is a series of workshops for music teachers who wish to dive into the world of music technology. Funded by the Luxembourgish National Research Fund (FNR, PSP-Classic), the initiative brings together educators eager to explore how technology can enhance music education and creative practice.
What we built and presented
During the workshop, participants got hands-on experience with the EMI-Kit (Embodied Music Interface Kit) – an open-source, low-cost system that allows musicians to control Digital Audio Workstation (DAW) parameters through body movement.
The EMI-Kit consists of:
- A wearable sensor device (M5StickC Plus2) that captures body orientation and gestures
- A receiver unit (M5Stack STAMP S3A) that converts sensor data to MIDI messages
Unlike expensive commercial alternatives, EMI-Kit is fully open source, customizable, and designed specifically for creative music practice and for research on embodied music interaction.
The Experience
Teachers experimented with mapping natural body movements – pitch, yaw, roll, and tap gestures – to various musical parameters in their DAWs. The low-latency wireless system made it possible to move and control sound, opening up new possibilities for expressive musical performance and pedagogy.
Learn More
Interested in exploring embodied music interaction yourself? Check out:
The EMI-Kit project as-is is a demonstrator to inspire educators to embrace these tools and imagine new ways of teaching and creating music. With some additional programming, the EMI-Kit platform can also serve as a basis for controlling musical parameters with various other sensors. Have fun checking out the EMI-Kit.
Fig: Participant package, with sender and receiver pair.
I’ve just pushed some updates to mot — a command-line application for working with OSC and MIDI messages. My LLM tells me that these are exciting updates, but I am not entirely sure that this is the case. Let me know if this ticks your boxes, and seek professional help if it does.
1. Scriptable MIDI Processor via Lua
I have implemented a MIDI processor that lets you transform, filter, and generate MIDI messages using Lua scripts.
Why is this useful? A MIDI processor acts as a middleman between your input devices and output destinations. You can do the following with incoming MIDI messages:
- Filter: block unwanted messages or channels, or select specific ranges
- Route: send different notes to different channels
- Generate: create complex patterns from simple input
The processor reads incoming MIDI from a physical device, processes it through your Lua script, and outputs the modified messages to a virtual MIDI port that your DAW or synth can receive. Some examples:
# Generate chords from single notes
mot midi_processor --script scripts/chord_generator.lua 06666

# Transpose notes up by one octave
mot midi_processor --script scripts/example_processor.lua 06666
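To give a feel for the scripting side, here is a minimal sketch of a transpose script. The callback name and message fields are assumptions for illustration; check the example scripts bundled with mot for the actual API.

-- Hypothetical processor script: the function name and message fields
-- are assumptions, not necessarily mot's actual Lua API.
function process(msg)
  if msg.type == "note_on" or msg.type == "note_off" then
    -- transpose up one octave, clamped to the valid MIDI note range
    msg.note = math.min(msg.note + 12, 127)
  end
  return msg -- return nil instead to drop the message entirely
end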
2. Network Discovery via mDNS
OSC receivers now advertise themselves on the network using mDNS/Bonjour with the _osc._udp service type.
This makes mot compatible with the EMI-Kit — the Embodied Music Interface Kit developed at IPEM, Ghent University. OSC-enabled devices can automatically discover mot receivers on your network, eliminating manual configuration, provided the OSC source implements this kind of discovery.
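To check that a receiver is advertising itself correctly, you can browse for the service with standard mDNS tools:

# Linux (avahi-utils)
avahi-browse --resolve _osc._udp

# macOS
dns-sd -B _osc._udp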
This weekend the more-or-less yearly conference of Hackerspace Ghent took place: Newline.gent. Hackers, makers, and curious minds gathered to share ideas, tools, experiments, and a few beers.
I had a small contribution with a short lecture-performance which covered how to control your computer with a flute. The lecture part covered the technical part of the build, the performance part included playing Flappy Bird with a flute. A third significant part of the talk — arguably the main focus — was devoted to bragging about the global attention the project received.
Other highlights of the Newline conference included talks on Home Assistant, 3D design, BTRFS and workshops that invited everyone to get involved.
Big thanks to the organizers and everyone who joined. I’m already looking forward to the next one!
This short guide shows how to set up Caddy as a local web server that provides TLS certificates, so you can develop websites over HTTPS immediately. Having a local HTTPS server in development can help with, e.g., debugging CORS issues, accessing resources that require an HTTPS connection, or trying out analytics platforms.
1. Configure your hosts file
If you want to use a domain name, first add a line to /etc/hosts which, in this case, points example.com at localhost.
echo "127.0.0.1 example.com" | sudo tee -a /etc/hosts
2. Configure Caddy
In a directory of your choosing, create a Caddyfile that tells Caddy to automatically generate certificates on the fly for example.com (or any other domain name). You may need to trust the main Caddy certificate on first use (see step 5).
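A minimal Caddyfile along these lines should do, assuming you simply want to serve static files from the current directory; the tls internal directive tells Caddy to use its internal CA instead of a public ACME issuer:

example.com {
    tls internal
    root * .
    file_server
}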
3. Create a test page

In the same directory, create a simple index.html file for Caddy to serve.

4. Run Caddy

Still in the same directory as the Caddyfile and the index.html file, run the following command to start the Caddy web server: caddy run
5. Trust the locally generated certificate
On macOS this means adding the local Caddy root certificate to your keychain. It can be found at /data/caddy/pki/authorities/local/root.crt (the exact location depends on Caddy's data directory). In other environments a similar step is needed.
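Caddy can also install its root certificate into the system trust store for you:

# installs Caddy's local root CA into the system trust store (may ask for elevated permissions)
caddy trust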
6. Access the test site
Open your web browser and navigate to https://example.com, or open the test site from the command line: open https://example.com. If you inspect the certificate, it should be issued by the ‘Caddy Local Authority’.
Power banks have become a staple for charging smartphones, tablets, and other devices on the go. They seem ideal for powering small microcontroller projects, but they often pose a problem for low-current applications. Most modern power banks include an auto-shutdown feature to conserve energy when they detect a current draw below a specific threshold, often around 50–200 mA. The idea is that the power bank can shut itself off once a smartphone is fully charged. However, if you rely on power banks to power DIY electronics projects or remote applications with a low current draw, this auto-off feature can be a significant inconvenience.
To address this issue, consider using power banks designed with an “always-on” or “low-current” mode. These power banks are engineered to sustain power delivery even when the current draw is minimal. Look for models that explicitly mention support for low-power devices in their specifications. If replacing the power bank isn’t an option, you can add a small load resistor or a USB dummy load to artificially increase the current draw. It works, but feels wrong and dirty.
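As a rough example: USB power banks supply 5 V, so a 47 Ω load resistor draws about 5 V / 47 Ω ≈ 106 mA and dissipates roughly 0.53 W, which calls for a resistor rated at 1 W or more. The value to pick depends on the shutdown threshold of your particular power bank.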
For a previous electronics project, I bought a power bank more or less at random. After a bit of testing, I determined that the minimum current draw needed to keep it awake was around 150 mA, so I added a resistor to increase the current draw. Only afterwards did I check the power bank's manual and noticed, luckily, that it has a low-current mode. I removed the resistor and improved the battery life of the project considerably. If you want to power your DIY Arduino or electronics project, first check the manual of the power bank you want to use!
Edit: after further testing, it turned out that the low-current mode of this specific power bank still shuts down after a couple of hours. Your mileage may vary, and the main point of this post still holds: check the manual of your power bank. Eventually I went with a solution designed for electronics projects.
There is this thing that starts playing birdsong when it detects movement. It is ideal for connecting with nature while nature calls. It is a good idea, executed well, but it got me thinking: this can be made less reliable, more time-consuming, more expensive, and with a shorter battery life. So I started working on a DIY version.
Vid: Playing birdsong when presence is detected with an ESP32 microcontroller.
The general idea is to start playing birdsong if someone is present in a necessary room. In addition to a few electronic components, the project needs birdsong recordings. Freesound is a great resource for all kinds of environmental sounds and has a collection of birdsong recordings, which were used for this project.
For the electronics, the project needs a microcontroller and a way to detect presence. I had a laser ranging sensor lying around which measures distance but can be repurposed to detect presence in a small room: most of the time, the distance to the opposite wall is reported; if a smaller distance is measured, it is probably because a person is present. The other components:
As is often the case with builds like this, neither the software nor the hardware is challenging conceptually, but making hardware and software cooperate is. Some pitfalls I encountered: the ESP32 C6 needs USB CDC enabled in the Arduino IDE, the I2C GPIO pins are non-standard, the many I2S parameters need to be exactly right, a nasty pop sounded once audio started, and a LiPo battery turned out to be broken. Most of the fixes can be found in the Arduino code.
I use a polling strategy to detect presence. A distance measurement is taken and then the ESP32 goes into a deep sleep until the next measurement. A sensor with the ability to wake up the microcontroller would be a better approach.
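The polling loop boils down to only a few steps. Below is a minimal sketch of the idea; the VL53L0X sensor, the Adafruit library, the pin numbers, and the thresholds are assumptions for illustration, and the project's actual Arduino code differs.

// Minimal polling sketch. Sensor, library, pins, and thresholds are
// illustrative assumptions; see the linked Arduino code for the real build.
#include <Wire.h>
#include <Adafruit_VL53L0X.h>

const int SDA_PIN = 6, SCL_PIN = 7;            // hypothetical non-standard I2C pins
const uint16_t WALL_DISTANCE_MM = 2000;        // assumed distance to the opposite wall
const uint64_t SLEEP_US = 10ULL * 1000 * 1000; // poll every 10 seconds

Adafruit_VL53L0X lox;

void setup() {
  Wire.begin(SDA_PIN, SCL_PIN); // the ESP32 C6 needs its I2C pins set explicitly
  if (lox.begin()) {
    VL53L0X_RangingMeasurementData_t measure;
    lox.rangingTest(&measure, false); // take a single distance measurement
    // RangeStatus 4 means 'out of range'
    if (measure.RangeStatus != 4 && measure.RangeMilliMeter < WALL_DISTANCE_MM) {
      // someone is probably present: start birdsong playback here
    }
  }
  // deep sleep until the next poll; on wake-up the ESP32 resets and runs setup() again
  esp_sleep_enable_timer_wakeup(SLEEP_US);
  esp_deep_sleep_start();
}

void loop() {} // never reached: deep sleep restarts the sketch from setup()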
Once everything was installed it worked well enough — motion triggered a random birdsong, creating a soothing, natural vibe. It may be less practical than the off-the-shelf version but I did learn quite a lot more than I would have by simply filling in a form and providing payment details…
A discussion at work led to the question of how much time it takes for an HTTP request to pass through an HTTP proxy. This blog post deals with that question by measuring a request passing through a stupid amount of HTTP proxies.
Fig: Measuring the time it takes to pass 500 proxies with Curl.
In modern development setups it is not uncommon for an HTTP request to pass through a few HTTP proxies before reaching the final server that actually handles it. In our case, a proxy handles the SSL certificate and forwards the request to a proxy which automatically routes requests to a Docker container. A final HTTP proxy runs in the Docker network and forwards the request to a web server. The response follows the same path in reverse.
Fig: Configuration to pass a HTTP request through many proxies. The final response is a simple text.
To measure the time it takes to pass through an HTTP proxy, I wrote a small script to start 500 separate instances of the Caddy web server, each configured as an HTTP/2 proxy. Then I measured the time it takes to pass through all 500 HTTP proxies, or only 490, 480, and so on, which results in the graph below.
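The timing itself can be done with curl's built-in measurement output. Something along these lines reports the total request time per entry point into the chain; the port numbers are illustrative:

# time requests entering the proxy chain at different depths (ports are examples)
for port in $(seq 9010 10 9500); do
  curl -so /dev/null -w "$port %{time_total}\n" "http://localhost:$port/"
done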
Fig: Time it takes to pass x amount of HTTP proxies. The y-axis represents the time taken (in seconds), and the x-axis indicates the number of HTTP proxies passed.
So each proxy pass takes about 0.4 milliseconds in one of the best cases, where requests are forwarded from and to localhost. Network overhead adds to that but assuming that interconnects are fast, adding a few HTTP proxies does not affect latency in a meaningful way. Of course it is best to evaluate your situation and measure.
In my house, I have a few smart home features: to control ventilation, to open and close solar screens, and to switch a few smart sockets. Up until a couple of days ago, the ventilation and screen controllers operated using custom software running on an ESP32. However, configuring, maintaining, upgrading, and integrating with this custom software gradually became a headache.
Recently, I switched from custom software to Tasmota, an open-source smart home platform targeting ESP32 devices. Tasmota includes a web UI, flexible configuration options, OTA upgrades, and scripting features. The scripting functionality allows devices to be extended with additional commands, which is especially practical for controlling my solar screens. These screens use pulses to toggle between up-stop-down-stop states. By default, Tasmota only supports enabling or disabling a relay, not enabling it for a very brief period (e.g., 150 milliseconds). With a short ‘Berry’ script, such functionality is quickly added.
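As an illustration, a pulse command in Berry could look roughly like this; the command name, relay index, and timing are placeholders, not the exact script I run:

# Hypothetical Berry sketch: adds a 'ScreenPulse' command that closes
# relay 1 for about 150 ms. Names and indices are illustrative.
def screen_pulse(cmd, idx, payload)
  tasmota.set_power(0, true)   # relay 1 on (index is zero-based)
  tasmota.set_timer(150, def () tasmota.set_power(0, false) end)  # off after ~150 ms
  tasmota.resp_cmnd_done()
end
tasmota.add_cmd('ScreenPulse', screen_pulse)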
I appreciate the effort of the Tasmota team to lower the entry barrier for users. They provide ample documentation and a web installer, making setup straightforward. Simply connect your ESP32 via USB, flash it with Tasmota, and configure it—all from your browser. It’s a surprisingly simple process compared to installing a dedicated toolchain. While this might not be what Tim Berners-Lee envisioned 35 years ago, it certainly simplifies the user experience. Lowering the entry barrier even further, some manufacturers offer smart home devices with Tasmota preinstalled, such as the Nous A1 smart sockets. Eternal September is here.
If you’re managing custom ESP32 smart home devices, consider switching to Tasmota. Its robust features, ease of setup, and active community support make it an excellent choice for both beginners and advanced users.
The research day of the Faculty of Arts and Philosophy of Ghent University took place last November. The theme of the day was ‘From Source to Understanding’ and the program gave an overview of the breadth of research at our faculty, with topics such as logic, history, archeology, chemistry, geography, language studies, … There were several contributions by our group, the Ghent Center for Digital Humanities. My close colleagues and I contributed a poster about a reusable text annotation building block.
Fig: Poster on a text annotation component.
At GhentCDH we support several text annotation projects and have extracted a text annotation component for reuse. The abstract reads:
“Text annotation is essential for analyzing ancient texts, identifying entities in texts, or documenting evolving grammar. There is a need for reusable annotation methods which cope with challenges such as overlapping annotations, filtering annotation types, and enabling large-scale collaboration and computational analysis on text annotation work.
We present a reusable text annotation component built with TypeScript and Vue 3. It provides an intuitive interface for creating, visualizing, and editing annotations, it allows component users to enrich annotations with complex metadata, and facilitates flexible annotation filtering. This solution meets many needs of researchers in digital humanities and ancient language studies and will be used in several GhentCDH projects.”
Imagine you want to stream a movie at home but also want to keep things quiet to avoid disturbing others. Evidently, this is what headsets were invented for. Connecting one wireless Bluetooth headset is typically straightforward, aside from the occasional Bluetooth pairing issue. But what if you want to watch that movie with someone else, and you both want to use headsets? Connecting two Bluetooth headsets, or even combining wired and wireless headsets to share the same audio, isn’t as simple as it sounds. This blog post shows how to achieve this on modern Linux distributions.
Fig: Connecting an audio source - Spotify - to multiple output devices by using audio routing with PipeWire and `qpwgraph`.
In recent years, several Linux distributions have started to support the PipeWire audio server; it is even the default audio server in Debian 12 and Ubuntu 22.10. With PipeWire, managing audio devices has become much easier. PipeWire enables flexible audio setups and supports audio routing: sending audio from a single source to several output devices. This is exactly what we need to stream audio to multiple headsets.
If you use PipeWire on your system, qpwgraph provides an intuitive graphical interface that lets you visualize and control audio routing. To connect multiple headsets:
First install qpwgraph, e.g. via apt install qpwgraph.
Start qpwgraph, which should show your current audio routing graph.
Pair your Bluetooth headsets to your machine. They will appear in the audio routing graph once paired successfully.
Connect the audio source to your headsets by connecting ‘wires’ from your media player to the headsets.
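The same connections can also be made from the command line with PipeWire's pw-link tool; the port names below are examples, so list the real ones on your system first:

# list the available output and input ports
pw-link --output
pw-link --input

# connect a player's left/right outputs to a headset (names are illustrative)
pw-link "spotify:output_FL" "bluez_output.headset1:playback_FL"
pw-link "spotify:output_FR" "bluez_output.headset1:playback_FR"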
I was surprised by how robust audio has become on Linux and how easy and user-friendly it is to set up even more complex audio/MIDI configurations. Give it a try!