
~ Updates for Panako - an acoustic fingerprinting system

Panako is an acoustic fingerprinting system I developed a couple of years ago. With acoustic fingerprinting systems it is possible to find duplicates in digital music archives and compare meta-data or identify unlabelled audio fragments. In the margins of my post-doc project working with large music archives, I have found the time to update Panako significantly. The updates simplify, improve and speed up Panako.

Fig: General content-based audio search scheme.

The main algorithms have been simplified. There is also a reduction in dependencies and a refocus on core functionality, which also simplifies building the software. The retrieval characteristics are improved, mainly thanks to the use of a fine-grained Gabor transform. Also new is the near-exact hashing construct, which helps with off-by-one issues when matching time bins. The key-value store is now LMDB, which speeds up Panako's query performance significantly. The updates should make Panako stand the test of time somewhat better.
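
The details of the near-exact hashing construct are not repeated here. Purely as an illustration of the off-by-one problem it addresses, the hypothetical Java sketch below simply probes the neighbouring time bins at query time, so a fingerprint that landed one bin off is still found. This is a generic workaround, not Panako's actual scheme.

import java.util.HashMap;
import java.util.Map;

//Illustrative only: tolerate an off-by-one time bin by probing the
//neighbouring keys at query time.
public class NearExactLookup {

  private final Map<Long, Integer> index = new HashMap<>();

  //Pack a (hash, timeBin) pair into a single 64-bit key.
  private long key(int hash, int timeBin) {
    return (((long) hash) << 20) | (timeBin & 0xFFFFF);
  }

  public void store(int hash, int timeBin, int resourceId) {
    index.put(key(hash, timeBin), resourceId);
  }

  //Probe timeBin - 1, timeBin and timeBin + 1 so a query that is
  //one time bin off still finds the stored fingerprint.
  public Integer query(int hash, int timeBin) {
    for (int delta = -1; delta <= 1; delta++) {
      Integer match = index.get(key(hash, timeBin + delta));
      if (match != null) return match;
    }
    return null;
  }
}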

A more complete list of updates can be found below and on the Panako GitHub repository:

  • The number of dependencies has been drastically cut by removing support for multiple key-value stores.
  • The key-value store has been changed to a faster and simpler system (from MapDB to LMDB).
  • The SyncSink functionality has been moved to another project (with Panako as dependency).
  • The main algorithms have been replaced with simpler and better working versions:
    • Olaf is a new implementation of the classic Shazam algorithm.
    • The algorithm described in the Panako paper was also replaced. The core ideas are still the same. The main change is the use of a Gabor transform to go from the time domain to the spectral domain (previously a constant-Q transform was used). The Gabor transform is implemented by JGaborator, which in turn relies on the Gaborator C++ library via JNI.
  • Folder structure has been simplified.
  • The UI which was mainly used for debugging has been removed.
  • A new set of helper scripts has been added in the scripts directory. They help with evaluation, parsing results, checking results, building Panako, creating documentation,…
  • Changed the default Panako location to ~/.panako, so users can install and use Panako more easily (without the need for sudo rights).
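
To give an idea of what the switch to LMDB looks like from Java, below is a minimal sketch using the lmdbjava bindings. The database name, key and value are made up for the example; Panako's actual key layout and storage code are organised differently.

import static java.nio.charset.StandardCharsets.UTF_8;

import java.io.File;
import java.nio.ByteBuffer;

import org.lmdbjava.Dbi;
import org.lmdbjava.DbiFlags;
import org.lmdbjava.Env;
import org.lmdbjava.Txn;

public class LmdbSketch {
  public static void main(String[] args) {
    File dir = new File(System.getProperty("user.home"), ".panako-demo");
    dir.mkdirs();

    //A memory-mapped environment with a single named database.
    Env<ByteBuffer> env = Env.create()
        .setMapSize(10_485_760)
        .setMaxDbs(1)
        .open(dir);
    Dbi<ByteBuffer> db = env.openDbi("fingerprints", DbiFlags.MDB_CREATE);

    ByteBuffer key = ByteBuffer.allocateDirect(env.getMaxKeySize());
    key.put("someFingerprintHash".getBytes(UTF_8)).flip();
    ByteBuffer value = ByteBuffer.allocateDirect(64);
    value.put("resourceId,timeOffset".getBytes(UTF_8)).flip();

    db.put(key, value); //uses an implicit write transaction

    try (Txn<ByteBuffer> txn = env.txnRead()) {
      ByteBuffer found = db.get(txn, key); //null if the key is absent
      System.out.println(UTF_8.decode(found));
    }
    env.close();
  }
}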

Fig: An interactive CLI session with Panako.


~ SyncSink - Synchronize media by aligning audio

I have just released a new version of SyncSink. SyncSink is a tool to synchronize media files with shared audio. It is ideal for synchronizing video captured by multiple cameras or audio captured by many microphones. It finds a rough alignment between audio captured from the same event and subsequently refines that offset with a cross-correlation step. Below you can see SyncSink in action, or you can try out SyncSink yourself (you will need ffmpeg and Java installed on your system).

SyncSink used to be part of the Panako acoustic fingerprinting system, but I decided it was better to keep the Panako package focused, so I made a separate repository for SyncSink. More information can be found at the SyncSink GitHub repo.

SyncSink is a tool to synchronize media files with shared audio. SyncSink matches and aligns shared audio and determines offsets in seconds. With these precise offsets it becomes trivial to sync files. SyncSink is, for example, used to synchronize video files: when you have many video captures of the same event, the audio attached to these video captures is used to align and sync multiple (independently operated) cameras.

Evidently, SyncSink can also synchronize audio captured by many (independent) microphones if some environmental sound is shared (leaked into) each recording.
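
Conceptually, the refinement step boils down to finding the lag at which two audio fragments around the rough offset correlate best. SyncSink's actual implementation is more involved; the sketch below only shows the basic cross-correlation idea, with plain float arrays standing in for decoded audio.

//Minimal brute-force cross-correlation: returns the lag (in samples)
//at which b best matches a, searched within +/- maxLag samples.
public class CrossCorrelation {

  public static int bestLag(float[] a, float[] b, int maxLag) {
    int bestLag = 0;
    double bestScore = Double.NEGATIVE_INFINITY;
    for (int lag = -maxLag; lag <= maxLag; lag++) {
      double score = 0;
      for (int i = 0; i < a.length; i++) {
        int j = i + lag;
        if (j >= 0 && j < b.length) {
          score += a[i] * b[j];
        }
      }
      if (score > bestScore) {
        bestScore = score;
        bestLag = lag;
      }
    }
    return bestLag;
  }

  public static void main(String[] args) {
    float[] reference = {0, 0, 0, 1, 0.5f, 0.25f, 0, 0};
    float[] shifted   = {0, 1, 0.5f, 0.25f, 0, 0, 0, 0};
    //Prints -2: 'shifted' leads 'reference' by two samples.
    System.out.println(bestLag(reference, shifted, 4));
  }
}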


Fig: SyncSink in action: syncing some audio files


~ Calling JNI code from multiple Java threads: sharing state

Fig: Mapping Java threads to C++ states in a JNI bridge.

This post deals with the problem of using stateful C++ code from multiple Java threads. With JNI (Java Native Interface) it is possible to glue C++ code to a Java environment. There are many helpful tutorials on how to call C++ code and receive results. JNI helps to reuse existing, often highly complex and computationally expensive, C++ code.

The introductory tutorials often stop once it is clear how to repackage (simple) datatypes and do not mention threads. It is, however, reasonable to expect JNI code to take thread-safety and proper multi-threading into account. In all but the simplest cases it is not that straightforward to share state on the C++ side and allow JNI code to be called from multiple Java threads. Incorrectly sharing state can lead to memory leaks, segmentation faults (segfaults) and application crashes. In what follows, a way to share thread-local state is presented.

It is quite common to have an init, work and dispose method: create some state, do some work using that state and finally dispose of the used resources. Each Java thread independently calls these methods and expects correct results. These results should not change when multiple Java threads call the same methods. In other words: the state should remain Java-thread-local. A typical Java class could look like the code below.

With the Java code in mind, the C++ code should know which Java thread is used and which state needs to be used for the work. Luckily there is a way to find out: The JNI specification states that each JNIEnv is local to a Java thread. So we can use the JNIEnv pointer to identify a thread. This is the idea that is used below.

The code maps a JNIEnv pointer to a structure with (any) state information. An unordered map is used for this mapping. There is, however, still a problem: multiple threads can call the init method at once. So multiple threads potentially write to the unordered_map at the same time which leads to problems. To prevent this from happening a mutex is used. The mutex, together with a unique lock, makes sure that only a single thread writes to the unordered map. The same holds for the dispose method.

The work method does not need a unique lock since it does not write to the unordered map and reading from multiple threads is no problem.

#include <jni.h>
#include <unordered_map>
#include <mutex>

const int DATA_ARRAY_SIZE = 300000 * 2;

struct BridgeState{
  jfloat *data;
};

//A hash map with a JNIEnv * as key and a BridgeState * as value,
//both stored as integer addresses.
std::unordered_map<uintptr_t, uintptr_t> stateMap;

//A mutex to ensure that writes to the stateMap are synchronized.
std::mutex stateMutex;

extern "C" JNIEXPORT jint JNICALL Java_Bridge_init(JNIEnv * env, jobject object){
  //Makes sure only one thread writes to the stateMap
  std::unique_lock<std::mutex> lck (stateMutex);

  BridgeState * state = new BridgeState();
  uintptr_t env_address = reinterpret_cast<uintptr_t>(env);

  state->data = new jfloat[DATA_ARRAY_SIZE];

  uintptr_t state_address = reinterpret_cast<uintptr_t>(state);
  stateMap[env_address] = state_address;

  return 1;
}

extern "C" JNIEXPORT jint JNICALL Java_Bridge_work(JNIEnv * env, jobject object){
  //Get the state pointer for this thread; at() only reads the map,
  //so no key is accidentally inserted.
  uintptr_t env_address = reinterpret_cast<uintptr_t>(env);
  BridgeState * state = reinterpret_cast<BridgeState *>(stateMap.at(env_address));

  //do something with state->data, e.g. calculate the sum
  int sum = 0;
  for(int i = 0 ; i < DATA_ARRAY_SIZE ; i++){
    state->data[i] = state->data[i] + 1;
    sum += (int) state->data[i];
  }
  return sum;
}

extern "C" JNIEXPORT jint JNICALL Java_Bridge_dispose(JNIEnv * env, jobject object){
  //Makes sure only one thread writes to the stateMap
  std::unique_lock<std::mutex> lck (stateMutex);

  uintptr_t env_address = reinterpret_cast<uintptr_t>(env);
  BridgeState * state = reinterpret_cast<BridgeState *>(stateMap.at(env_address));
  stateMap.erase(env_address);

  //cleanup memory
  delete [] state->data;
  delete state;
  return 0;
}
public class Bridge{

  //Load a native library
  static {
    try {
      System.loadLibrary("bridge");
    } catch (UnsatisfiedLinkError e){
      e.printStackTrace();
    }
  }

  private native int init();

  private native int work();

  private native int dispose();

  public static void main (String[] args){
    //start work on 20 threads
    for(int i = 0 ; i < 20 ; i++){
      new Thread(new Runnable() {
        @Override
        public void run() {
          //init, work and dispose are called from the same thread,
          //so the native side sees one JNIEnv per Runnable
          Bridge b = new Bridge();
          b.init();
          b.work();
          b.dispose();
        }
      }).start();
    }
  }
}

This conceptual code has been lifted from a JNI library doing actual work: the JGaborator JNI bridge. If you need more information on how to compile and use this construct in actual code, please have a look at the JGaborator GitHub repository.


~ JGaborator Updated - Fine grained spectral transforms from Java

I have updated the JGaborator library. The library quickly calculates fine-grained constant-Q spectral representations of audio signals from Java. Such a spectral transform can be used for visualisation or as a front-end for audio processing or music information retrieval applications.

The calculation of a Gabor transform is done by a C++ library named Gaborator. JGaborator provides a Java Native Interface (JNI) bridge to that library. Thanks to the recent updates, the native library is now automatically unpacked, which makes it easy to use on supported platforms (Intel macOS and x64 Linux).

The new version of JGaborator now also allows multiple Java threads to call the transform. This has the potential to speed up some audio processing chains dramatically.
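
To make that concrete, here is a minimal sketch that runs a spectral transform for several audio files on a small thread pool. The GaborTransform interface and its process method are hypothetical stand-ins, not JGaborator's actual API; the point is simply that each task can now call into the native code safely from its own worker thread (see the JNI bridge post above).

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelTransforms {

  //Hypothetical wrapper around a JNI-backed spectral transform.
  interface GaborTransform {
    float[][] process(String audioFile);
  }

  public static void analyse(List<String> audioFiles, GaborTransform transform)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    for (String file : audioFiles) {
      pool.submit(() -> {
        //Each worker thread ends up with its own native state.
        float[][] spectrogram = transform.process(file);
        System.out.printf("%s: %d spectral frames%n", file, spectrogram.length);
      });
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.MINUTES);
  }
}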

The visualisation parts of JGaborator also received light touch-ups. Below, a number of screenshots of spectral representations of several audio files can be seen. If you want to try it yourself, download the JGaborator JAR-file. Note that it should only work on Intel macOS and x64 Linux with ffmpeg installed on your path. For other environments, please read and follow the JGaborator instructions to get it working.


~ Music-based biofeedback to reduce tibial shock in over-ground running: a proof-of-concept study


For the last couple of years there has been a fruitful collaboration between the systematic musicology (IPEM) and sports-science departments at Ghent University. IPEM has a rich history of fundamental research on the link between movement and music. In a newly published proof-of-concept study, the music-movement link is used to improve running style: the runner is equipped with a musical biofeedback system to lower foot impact. For more details, see:

Music-based biofeedback to reduce tibial shock in over-ground running: a proof-of-concept study, published in Scientific Reports(2021) by Van den Berghe, P., Lorenzoni, V., Derie, R. et al.

Abstract Methods to reduce impact in distance runners have been proposed based on real-time auditory feedback of tibial acceleration. These methods were developed using treadmill running. In this study, we extend these methods to a more natural environment with a proof-of-concept. We selected ten runners with high tibial shock. They used a music-based biofeedback system with headphones in a running session on an athletic track. The feedback consisted of music superimposed with noise coupled to tibial shock. The music was automatically synchronized to the running cadence. The level of noise could be reduced by reducing the momentary level of tibial shock, thereby providing a more pleasant listening experience. The running speed was controlled between the condition without biofeedback and the condition of biofeedback. The results show that tibial shock decreased by 27% or 2.96 g without guided instructions on gait modification in the biofeedback condition. The reduction in tibial shock did not result in a clear increase in the running cadence. The results indicate that a wearable biofeedback system aids in shock reduction during over-ground running. This paves the way to evaluate and retrain runners in over-ground running programs that target running with less impact through instantaneous auditory feedback on tibial shock.


~ ISMIR 2020 - Virtual Conference

ISMIR 2020 Logo

From 11-16 October 2020 the latest instalment of the ISMIR conference series was held. Due to the pandemic, the 21st ISMIR conference was the first virtual one. As usual, participants and presenters from around the world joined the conference. For the first time, however, not all participants synchronised their circadian rhythm. By repeating most events with 12h in between, the organisers managed to put together a schedule befitting nearly all participants.

The virtual format had some clear advantages: travel was not needed, so (environmental) cost was low. Attendance fees were lower than usual since no spaces or catering was needed. This democratised the conference experience and attendance reached a record high.

The scientific program of the conference was impressive and varied. At the conference’s Late Breaking/Demo session I presented Olaf: Overly Lightweight Acoustic Fingerprinting.


~ PaPiOM: Patterns in Pitch Organization in Music

From the 1st of October 2020 I will start on a new research project. The BOF fund of Ghent University is kind enough to sponsor the project for three years. The abstract is as follows:

Music is present in every culture in the world. We as a species seem to have an urge to make music. While the diversity of music cultures around the world is phenomenal, they do seem to have patterns in common. Especially for pitch, one of the fundamental building blocks of music, there are strong reasons to believe that there are commonalities amongst cultures in how pitch is organised. A better insight into these common patterns may help to answer questions on the definition, origins and evolution of music.
Common patterns in pitch organisation can be studied from two perspectives. Firstly, insight into how humans perceive and make music can be gained from systematic, experimental work. Over the years this has yielded insights into which pitch organisations might be most fit for our perceptual, neurophysiological system. Secondly, these patterns can be observed directly in large-scale, corpus-based, cross-cultural studies, a potential that has not been exploited as of yet.
During this fellowship a large-scale global corpus with field recordings will be compiled in collaboration. Music Information Retrieval techniques will be employed to describe how pitch is organised in the corpus. More specifically, it will support claims on the use of discrete pitches, octave equivalence, the number of pitch classes in use and the pitch interval structures. The uncovered fundamental properties of pitch will be confronted with findings from experimental work.

Recently I presented the outline of the project with the following slides:


~ Olaf - Acoustic fingerprinting on the ESP32 and in the Browser

A good year ago I was asked to develop audio recognition technology for an e-costume. The idea was that lights in the costume would follow a sequence synchronised to a certain song. Only a single song should trigger the lights; all other music should be ignored. Recognition of music and synchronisation with it is typically done using audio fingerprinting techniques. The challenge was that the recognition needed to run on a cheap, battery-powered microcontroller with limited CPU and memory. I delivered a prototype, but eventually a cheap, battle-tested, off-the-shelf, IP-cleared alternative was found.

The prototype gathered dust for a while but the idea stuck in my head. With my daughter’s fourth birthday approaching during the lockdown, I decided to turn the prototype into an over-engineered birthday gift and let an ‘Elsa-dress’ react to ‘Let It Go’ from the Frozen soundtrack. With the prototype as a starting point, I ordered an RGB LED strip, a beefy Li-Ion battery, an I2S digital microphone and, of course, an Elsa-dress.

I had an ESP32 microcontroller lying around and used it as the core of the system: it supports I²S, has a floating-point unit (FPU), is easy to use together with LED strips and has enough memory. The FPU makes it straightforward to run the same code on traditional computers as on embedded devices: fixed-point math can be avoided.

After soldering the components together, and with help from my better half to sew in the LED strip, it all came together. In the video below, the result of our work can be seen. The video first shows a song that should not be and is not recognised. Then, “Let It Go” is played and correctly recognised. After the song is stopped, the lights stay on for a while and finally stop: this is by design, to allow gaps in recognition. Lastly, the song is continued and again correctly recognised.

With my limited C experience the prototype code was not well organised. During my second attempt this improved enough that I feel comfortable sharing the code on GitHub: Olaf – Overly Lightweight Acoustic Fingerprinting.

The code went through several iterations, was expanded beyond the original scope and became a capable general-purpose acoustic fingerprinting system with many applications. Olaf performs quite well thanks to its resource-friendly design and the use of PFFT and LMDB. Especially LMDB, a fast, B+-tree backed key-value store with low storage overhead, enables performant storage and lookups.

The GitHub repository does not contain an example for the ESP32. That code depends on the specific microcontroller, digital microphone and pins used, and Olaf needs to be hacked to exhibit the requested behaviour. All in all, that code is much less reusable (and sharable, testable, maintainable). I have, however, included a PlatformIO project for Olaf on ESP32 for reference.

WASM: Olaf in the browser

Olaf, being written in ANSI C, can run in the browser thanks to the Emscripten compiler. According to its website, Emscripten ‘…lets you run C and C++ on the web at near-native speed without plugins’. Combining the Web Audio API and the WASM version of Olaf makes web-based acoustic fingerprinting applications possible.

Below you can try out Olaf. The exact same code is running in your browser as on the ESP32 demonstrated above. This means that Olaf is listening to recognise ‘Let It Go’ from the Frozen soundtrack. For your convenience, the song can be started below on the left. On the right, you can start Olaf by allowing incoming audio to be analysed. The FFT is calculated by Olaf and visualised using Pixi.js. After a few seconds the red fingerprints should become green, indicating a match. Once you stop the song, the fingerprints will eventually turn red again. As with the video above: going from a match to no match takes a couple of seconds, to allow gaps in recognition.

1. Start the song and play it aloud. Singing along is encouraged.
2. Start the microphone and check whether recognition succeeds.
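
The ‘gaps in recognition’ behaviour mentioned above is essentially hysteresis on the match state: a single query round without a match does not clear the match, only a run of misses does. The tiny sketch below illustrates the idea in Java; Olaf itself is written in C and its actual logic and thresholds differ.

//Tiny state machine: keep reporting a match until a number of
//consecutive query rounds without a match has passed.
public class MatchState {

  private final int allowedGap; //rounds without a match before giving up
  private int missedRounds = 0;
  private boolean seenMatch = false;

  public MatchState(int allowedGap) {
    this.allowedGap = allowedGap;
  }

  public void update(boolean matchedThisRound) {
    if (matchedThisRound) {
      seenMatch = true;
      missedRounds = 0;
    } else {
      missedRounds++;
    }
  }

  //True while a match was seen recently enough.
  public boolean isMatching() {
    return seenMatch && missedRounds <= allowedGap;
  }
}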

Olaf was featured on Hackaday. There is also a small discussion about Olaf on Hacker News. A write-up of this project also ended up as a contribution to the Late Breaking/Demo track of the first virtual ISMIR conference: Olaf ISMIR 2020 LBD abstract.




~ LTC - SMPTE Decoder on Teensy

Teensy with audio shield

For synchronisation between several devices, SMPTE timecode data is often encoded into audio as LTC, or linear timecode.

This blog post presents an LTC decoder for a Teensy 3.2 microcontroller with audio shield.

The audio shield takes care of the line-level audio input. This audio input is then decoded by libltc, which runs as-is on a Teensy without modification. The three elements are combined in a relatively simple Teensy patch.

To use the decoder, connect the left channel of the line-level input to an SMPTE source via e.g. an RCA plug.

For code, comments and pull requests, please consult the GitHub repository for the Teensy SMPTE LTC decoder.

A Teensy decoding an LTC SMPTE signal



~ MIDImorphosis: recording audio and sensor data

During an experiment monitoring a music performance it might be a requirement to record music, video and sensor data synchronously. Recording analog sensors (balance boards, accelerometers, light sensors, distance sensors) together with audio and video is often problematic. Ideally, standard DAW software can be used to record both audio and sensor data. A system is presented here that makes it relatively straightforward to record sensor data together with audio/video.

The basic idea is simple: a microcontroller is programmed to appear as a class compliant MIDI device. Analog measurements on the micro-controller are translated to a specific MIDI protocol. The MIDI data, on the capturing side, can then be converted again into the original sensor data. This setup has several advantages:



Fig: Visualization in HTML of analog sensor data, captured as MIDI


While the concept is relatively simple, there are many details to get right. Please consult the MIDImorphosis GitHub page, which details the system consisting of an analog sensor, a MIDI protocol and a clocking infrastructure.
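
As an illustration of the capturing/decoding side, the sketch below converts an incoming MIDI message back into a sensor value using javax.sound.midi. The 14-bit pitch-bend encoding is an assumption made for this example; MIDImorphosis defines its own protocol, which is documented in the repository.

import javax.sound.midi.MidiMessage;
import javax.sound.midi.Receiver;
import javax.sound.midi.ShortMessage;

//Receives MIDI messages and converts them back into analog sensor readings.
public class SensorReceiver implements Receiver {

  @Override
  public void send(MidiMessage message, long timeStamp) {
    if (message instanceof ShortMessage) {
      ShortMessage sm = (ShortMessage) message;
      if (sm.getCommand() == ShortMessage.PITCH_BEND) {
        //Reassemble the 14-bit value from the two 7-bit data bytes.
        int raw = (sm.getData2() << 7) | sm.getData1();
        double normalized = raw / 16383.0; //back to a 0..1 sensor range
        System.out.printf("channel %d sensor value %.4f%n", sm.getChannel(), normalized);
      }
    }
  }

  @Override
  public void close() { }
}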



~ LW Research Day 2019 on Digital Humanities

On the 9th of September 2019 the second research day organized by the Faculty of Arts and Philosophy of Ghent University took place. The theme of the day was ‘Digital Humanities’ and the program gave an overview of the breadth of research at our faculty, with topics such as logic, history, archeology, chemistry and geography.

Together with Jeska, I presented an ongoing study on musical interaction. In the study, one of the measurements was the body movement of two participants. This is done with boards equipped with weight sensors. The resulting data can be inspected for synchronisation, quality and quantity of movement, and movement periodicities.


The hardware is the work of Ivan Schepers; the software used to capture and transmit messages is called “the MIDImorphosis” and was developed by me. The research is a collaboration with Jeska Buhman, Marc Leman and Alessandro Dell’Anna. An article with detailed findings is forthcoming.


~ AAWM/FMA 2019 - Birmingham

I am currently in Birmingham, UK for the 2019 joint Analytical Approaches to World Music (AAWM) and Folk Music Analysis (FMA) conference. The opening concert by the RBC folk ensemble already provided probably the most lively and enthusiastic conference opening ever, especially considering the early morning hour (9.30). At the conference, two studies on which I collaborated will be presented:

Automatic comparison of human music, speech, and bird song suggests uniqueness of human scales

Automatic comparison of human music, speech, and bird song suggests uniqueness of human scales by Jiei Kuroyanagi, Shoichiro Sato, Meng-Jou Ho, Gakuto Chiba, Joren Six, Peter Pfordresher, Adam Tierney, Shinya Fujii and Patrick Savage

The uniqueness of human music relative to speech and animal song has been extensively debated, but rarely directly measured. We applied an automated scale analysis algorithm to a sample of 86 recordings of human music, human speech, and bird songs from around the world. We found that human music throughout the world uniquely emphasized scales with small-integer frequency ratios, particularly a perfect 5th (3:2 ratio), while human speech and bird song showed no clear evidence of consistent scale-like tunings. We speculate that the uniquely human tendency toward scales with small-integer ratios may relate to the evolution of synchronized group performance among humans.

Automatic comparison of global children’s and adult songs

Automatic comparison of global children’s and adult songs by Shoichiro Sato, Joren Six, Peter Pfordresher, Shinya Fujii and Patrick Savage

Music throughout the world varies greatly, yet some musical features like scale structure display striking crosscultural similarities. Are there musical laws or biological constraints that underlie this diversity? The “vocal mistuning” hypothesis proposes that cross-cultural regularities in musical scales arise from imprecision in vocal tuning, while the integer-ratio hypothesis proposes that they arise from perceptual principles based on psychoacoustic consonance. In order to test these hypotheses, we conducted automatic comparative analysis of 100 children’s and adult songs from throughout the world. We found that children’s songs tend to have narrower melodic range, fewer scale degrees, and less precise intonation than adult songs, consistent with motor limitations due to their earlier developmental stage. On the other hand, adult and children’s songs share some common tuning intervals at small-integer ratios, particularly the perfect 5th (~3:2 ratio). These results suggest that some widespread aspects of musical scales may be caused by motor constraints, but also suggest that perceptual preferences for simple integer ratios might contribute to cross-cultural regularities in scale structure. We propose a “sensorimotor hypothesis” to unify these competing theories.


~ trix: Realtime audio over IP

At work we have a really nice piano and I wanted to be able to broadcast a live performance over the internet with low latency to potential live listeners. In all honesty, only my significant other gets moderately lukewarm about the idea of hearing me play live. Anyhow:

I did not find any practical tool to easily pump audio over the internet. I did find something very close, called trx, by Mark Hills: trx is a simple toolset for broadcasting live audio from Linux. Unfortunately it only works with the ALSA audio system and is limited to Linux. I decided to extend it to support macOS and PulseAudio. I also extended its name to form trix.

Audio Transmitter/Receiver over Ip eXchange (trix) is a simple toolset for broadcasting live audio from Linux or macOS. It sends and receives encoded audio over IP networks via an audio interface. If the audio interfaces are properly configured, a low-latency point-to-point or multicast broadband audio connection can be achieved. This could be used for networked music performances. The inclusion of the intermediate RtAudio library provides support for various audio inputs and outputs.

More information on trix can be found on the trix github page.

Latency

The system can be configured for low-latency use. The whole chain depends on several components which each add to the total latency: audio input latency, encoder (algorithmic) delay, network latency and finally audio output latency.

Thanks to the use of RtAudio it should be possible to use low-latency APIs to access audio devices (ASIO on Windows or JACK on Unix). This means that audio input and output latencies can be as low as the hardware allows. The Opus encoder/decoder that is used has a low algorithmic delay: 25 ms by default, but it can be configured down to 2.5 ms (see here). The network latency (and jitter) depends very much on the distance to cover. On a local network it can be kept low; on wide area networks (the internet) control is lost and latencies can add up, depending on the number of hops to take. Jitter can be problematic if the smallest possible buffers are used: dropouts might occur and this might affect the audio in a noticeable way.
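
As a purely hypothetical example of how these components add up: a 128-sample buffer at 48 kHz contributes about 2.7 ms on the input side and again on the output side, Opus configured for its minimal algorithmic delay adds 2.5 ms, and a local network hop of around 5 ms brings the total chain to roughly 13 ms. The numbers are made up for illustration; actual values depend entirely on the hardware, the Opus settings and the network.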


~ Audio marker finder

I have uploaded a small piece of software which allows users to find a specific audio marker in audio streams. It is mainly practical for synchronising a camera (audio/video) recording with other audio containing the same marker. The marker is a set of three beeps. These three beeps are found with millisecond-accurate precision within the audio streams under analysis. By comparing the timing of the marker in each stream, synchronization becomes possible. It can be regarded as an alternative to the movie clapperboard.
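
The tool's own detector is part of the linked source code. Purely as an illustration of how the energy at a single beep frequency can be tracked in a stream, the sketch below uses the Goertzel algorithm on blocks of samples; the marker frequency and block size are made up for the example.

//Goertzel algorithm: energy of one target frequency in a block of samples.
public class Goertzel {

  public static double power(float[] block, double targetFrequency, double sampleRate) {
    double omega = 2.0 * Math.PI * targetFrequency / sampleRate;
    double coeff = 2.0 * Math.cos(omega);
    double sPrev = 0;
    double sPrev2 = 0;
    for (float sample : block) {
      double s = sample + coeff * sPrev - sPrev2;
      sPrev2 = sPrev;
      sPrev = s;
    }
    return sPrev2 * sPrev2 + sPrev * sPrev - coeff * sPrev * sPrev2;
  }

  public static void main(String[] args) {
    double sampleRate = 44100;
    double beepFrequency = 1000; //hypothetical marker frequency
    float[] block = new float[1024];
    for (int i = 0; i < block.length; i++) {
      block[i] = (float) Math.sin(2.0 * Math.PI * beepFrequency * i / sampleRate);
    }
    //A block containing the beep yields a much higher power than silence.
    System.out.println(power(block, beepFrequency, sampleRate));
  }
}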

Screenshot of the Audio marker finder

The source code for the audio marker finder is on GitHub. The software is used in the Art Science Interaction Lab of the Krook. Below you can download the Audio marker finder and the marker itself.


~ Validity and reliability of peak tibial accelerations as real-time measure of impact loading during over-ground rearfoot running at different speeds - Journal of Biomechanics

With the goal of reducing common running injuries in mind, we first need to measure some running-style characteristics. Therefore, we have developed a sensor to measure how hard a runner’s foot repeatedly hits the ground. This sensor has been compared with laboratory equipment, which proves that its measurements are valid and repeatable. The main advantage of our sensor is that it can be used ‘in the wild’, outside the lab, on a runner’s regular routes. We want to use this sensor to provide real-time biofeedback in order to change running style and ultimately reduce injury risk.

We have published an article on this sensor in the Journal of Biomechanics:
Pieter Van den Berghe, Joren Six, Joeri Gerlo, Marc Leman, Dirk De Clercq,
Validity and reliability of peak tibial accelerations as real-time measure of impact loading during over-ground rearfoot running at different speeds, (author version)
Journal of Biomechanics,
2019

Studies seeking to determine the effects of gait retraining through biofeedback on peak tibial acceleration (PTA) assume that this biometric trait is a valid measure of impact loading that is reliable both within and between sessions. However, reliability and validity data were lacking for axial and resultant PTAs along the speed range of over-ground endurance running. A wearable system was developed to continuously measure 3D tibial accelerations and to detect PTAs in real-time. Thirteen rearfoot runners ran at 2.55, 3.20 and 5.10 m*s-1 over an instrumented runway in two sessions with re-attachment of the system. Intraclass correlation coefficients (ICCs) were used to determine within-session reliability. Repeatability was evaluated by paired T-tests and ICCs. Concerning validity, axial and resultant PTAs were correlated to the peak vertical impact loading rate (LR) of the ground reaction force. Additionally, speed should affect impact loading magnitude. Hence, magnitudes were compared across speeds by RM-ANOVA. Within a session, ICCs were over 0.90 and reasonable for clinical measurements. Between sessions, the magnitudes remained statistically similar with ICCs ranging from 0.50 to 0.59 for axial PTA and from 0.53 to 0.81 for resultant PTA. Peak accelerations of the lower leg segment correlated to LR with larger coefficients for axial PTA (r range: 0.64–0.84) than for the resultant PTA per speed condition. The magnitude of each impact measure increased with speed. These data suggest that PTAs registered per stand-alone system can be useful during level, over-ground rearfoot running to evaluate impact loading in the time domain when force platforms are unavailable in studies with repeated measurements.


~ Nano4Sports in Team Scheire

‘Team Scheire’ is a Flemish TV program with a concept similar to BBC Two’s ‘The Big Life Fix’. In the program, makers create ingenious new solutions to everyday problems and build life-changing devices for people in desperate need.

One of the cases is Ben. Ben loves to run but has a recurring running-related injury. To monitor Ben’s running and determine a maximum training length, a sensor was developed that measures the impact and the number of steps taken. The program makers were interested in the results of the Nano4Sports project at UGent. One of the aims of that project is to build these types of sensors and the know-how related to correct interpretation of data and use of such devices. Below, a video with some background information can be found:

The solution built for the program is documented in a GitHub repository. One of the scientific results of the Nano4Sports project can be found in an article for the Journal of Biomechanics titled Validity and reliability of peak tibial accelerations as real-time measure of impact loading during over-ground rearfoot running at different speeds.


~ ISMIR 2018 Conference - Automatic Analysis Of Global Music Recordings suggests Scale Tuning Universals

Thanks to the support of a travel grant from the Faculty of Arts and Philosophy of Ghent University, I was able to attend the ISMIR 2018 conference, a conference on Music Information Retrieval. I am co-author of a contribution to the Late-Breaking/Demos session.

The structure of musical scales has been proposed to reflect universal acoustic principles based on simple integer ratios. However, some studying tuning in small samples of non-Western cultures have argued that such ratios are not universal but specific to Western music. To address this debate, we applied an algorithm that could automatically analyze and cross-culturally compare scale tunings to a global sample of 50 music recordings, including both instrumental and vocal pieces. Although we found great cross-cultural diversity in most scale degrees, these preliminary results also suggest a strong tendency to include the simplest possible integer ratio within the octave (perfect fifth, 3:2 ratio, ~700 cents) in both Western and non-Western cultures. This suggests that cultural diversity in musical scales is not without limit, but is constrained by universal psycho-acoustic principles that may shed light on the evolution of human music.


~ JGaborator - Fast Gabor spectral transforms in Java

Recently I have published a small library on GitHub called JGaborator. The library quickly calculates fine-grained constant-Q spectral representations of audio signals from Java. The calculation of a Gabor transform is done by a C++ library named Gaborator. A Java Native Interface (JNI) bridge to the C++ Gaborator is provided. A combination of Gaborator and a fast FFT library (such as pfft) allows fine-grained constant-Q transforms at a rate of about 200 times real-time on moderate hardware. It can serve as a front-end for several audio processing or MIR applications.

For more information on the Gaborator C++ library by Andreas Gustafsson, please see the gaborator.com website or a talk by the author about the library, called Exploring time-frequency space with the Gaborator.

While the Gaborator allows reversible transforms, only a forward transform (from the time domain to the spectral domain) is currently supported from Java. A visualization tool for spectral information is part of this package. See below for a screenshot:

JGaborator screenshot


~ TISMIR journal article - A Case for Reproducibility in MIR: Replication of ‘A Highly Robust Audio Fingerprinting System’

As an extension of the ISMIR conferences, the International Society for Music Information Retrieval started a new journal: TISMIR. The first issue contains an article of mine:
A Case for Reproducibility in MIR: Replication of ‘A Highly Robust Audio Fingerprinting System’. The abstract can be read here:

Claims made in many Music Information Retrieval (MIR) publications are hard to verify due to the fact that (i) often only a textual description is made available and code remains unpublished – leaving many implementation issues uncovered; (ii) copyrights on music limit the sharing of datasets; and (iii) incentives to put effort into reproducible research – publishing and documenting code and specifics on data – is lacking. In this article the problems around reproducibility are illustrated by replicating an MIR work. The system and evaluation described in ‘A Highly Robust Audio Fingerprinting System’ is replicated as closely as possible. The replication is done with several goals in mind: to describe difficulties in replicating the work and subsequently reflect on guidelines around reproducible research. Added contributions are the verification of the reported work, a publicly available implementation and an evaluation method that is reproducible.


~ JNMR article - Beyond documentation – The digital philology of interaction heritage

Marc Leman and I have recently published an article in the Journal of New Music Research for a special issue on Digital Philology for Multimedia Cultural Heritage. Our contribution is titled Beyond documentation – The digital philology of interaction heritage.

A philologist’s approach to heritage is traditionally based on the curation of documents, such as text, audio and video. However, with the advent of interactive multimedia, heritage becomes floating and volatile, and not easily captured in documents. We propose an approach to heritage that goes beyond documents. We consider the crucial role of institutes for interactive multimedia (as motor of a living culture of interaction) and propose that the digital philologist’s task will be to promote the collective/shared responsibility of (interactive) documenting, engage engineering in developing interactive approaches to heritage, and keep interaction-heritage alive through the education of citizens.