
~ Audio marker finder

I have uploaded a small piece of software that allows users to find a specific audio marker in audio streams. It is mainly useful to synchronise a camera (audio/video) recording with other audio containing the same marker. The marker is a set of three beeps, which are located with millisecond precision in the audio streams under analysis. By comparing the timing of the marker in each stream, synchronization becomes possible. It can be regarded as an alternative to the movie clapperboard.
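How could such a marker be found with millisecond precision? One straightforward approach, sketched below, is brute-force cross-correlation of the recording with the known marker waveform; the offset with the highest correlation is the marker position. This is only an illustration of the idea, not necessarily the method used by the Audio marker finder:

```java
// Sketch: locate a known marker in a longer recording by brute-force
// cross-correlation. Illustrative only; the actual Audio marker finder
// may use a different method.
public class MarkerLocator {

    // Returns the offset (in samples) where the marker correlates best
    // with the recording.
    public static int bestOffset(float[] recording, float[] marker) {
        int bestIndex = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int offset = 0; offset <= recording.length - marker.length; offset++) {
            double score = 0;
            for (int i = 0; i < marker.length; i++) {
                score += recording[offset + i] * marker[i];
            }
            if (score > bestScore) {
                bestScore = score;
                bestIndex = offset;
            }
        }
        return bestIndex;
    }

    public static void main(String[] args) {
        float[] recording = new float[44100]; // one second at 44.1 kHz
        float[] marker = new float[4410];     // a 100 ms marker
        recording[30000] = 1f;                // toy impulse standing in for the beeps
        marker[0] = 1f;
        int offsetInSamples = bestOffset(recording, marker);
        System.out.printf("Marker found at %.1f ms%n", 1000.0 * offsetInSamples / 44100.0);
    }
}
```

Comparing the offsets found in two recordings gives the shift needed to line them up.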

Screenshot of the Audio marker finder

The source code for the Audio marker finder is on GitHub. The software is used in the Art Science Interaction Lab at De Krook. Below you can download the Audio marker finder and the marker itself.


~ Nano4Sports in Team Scheire

‘Team Scheire’ is a Flemish TV program with a concept similar to BBC Two’s ‘The Big Life Fix’. In the program, makers create ingenious new solutions to everyday problems and build life-changing devices for people in desperate need.

One of the cases is Ben. Ben loves to run but has a recurring running-related injury. To monitor Ben’s running and determine a maximum training length, a sensor was developed that measures the impact of each step and the number of steps taken. The program makers were interested in the results of the Nano4Sports project at UGent. One of the aims of that project is to build exactly this type of sensor, together with the know-how needed to correctly interpret the data and use such devices. Below, a video with some background information can be found:

The solution built for the program is documented in a GitHub repository. One of the scientific results of the Nano4Sports project can be found in an article for the Journal of Biomechanics titled Validity and reliability of peak tibial accelerations as real-time measure of impact loading during over-ground rearfoot running at different speeds.


~ Validity and reliability of peak tibial accelerations as real-time measure of impact loading during over-ground rearfoot running at different speeds - Journal of Biomechanics

With the goal of reducing common running injuries in mind, we first need to measure some running style characteristics. Therefore, we have developed a sensor that measures how hard a runner’s foot repeatedly hits the ground. This sensor has been compared with laboratory equipment, which shows that its measurements are valid and repeatable. The main advantage of our sensor is that it can be used ‘in the wild’, outside the lab, on a runner’s regular routes. We want to use this sensor to provide real-time biofeedback in order to change running style and ultimately reduce injury risk.

We have published an article on this sensor in the Journal of Biomechanics:
Pieter Van den Berghe, Joren Six, Joeri Gerlo, Marc Leman and Dirk De Clercq. Validity and reliability of peak tibial accelerations as real-time measure of impact loading during over-ground rearfoot running at different speeds (author version). Journal of Biomechanics, 2019.

Studies seeking to determine the effects of gait retraining through biofeedback on peak tibial acceleration (PTA) assume that this biometric trait is a valid measure of impact loading that is reliable both within and between sessions. However, reliability and validity data were lacking for axial and resultant PTAs along the speed range of over-ground endurance running. A wearable system was developed to continuously measure 3D tibial accelerations and to detect PTAs in real-time. Thirteen rearfoot runners ran at 2.55, 3.20 and 5.10 m·s⁻¹ over an instrumented runway in two sessions with re-attachment of the system. Intraclass correlation coefficients (ICCs) were used to determine within-session reliability. Repeatability was evaluated by paired t-tests and ICCs. Concerning validity, axial and resultant PTAs were correlated to the peak vertical impact loading rate (LR) of the ground reaction force. Additionally, speed should affect impact loading magnitude. Hence, magnitudes were compared across speeds by RM-ANOVA. Within a session, ICCs were over 0.90 and reasonable for clinical measurements. Between sessions, the magnitudes remained statistically similar with ICCs ranging from 0.50 to 0.59 for axial PTA and from 0.53 to 0.81 for resultant PTA. Peak accelerations of the lower leg segment correlated to LR with larger coefficients for axial PTA (r range: 0.64–0.84) than for the resultant PTA per speed condition. The magnitude of each impact measure increased with speed. These data suggest that PTAs registered by the stand-alone system can be useful during level, over-ground rearfoot running to evaluate impact loading in the time domain when force platforms are unavailable in studies with repeated measurements.
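The abstract mentions that PTAs are detected in real time on the wearable. As a rough illustration of the general idea only (the published system’s actual algorithm and parameters are not described here), a threshold-and-local-maximum peak detector over a streaming acceleration signal could look like this:

```java
// Minimal sketch of streaming peak detection on an acceleration signal.
// The threshold and refractory period are illustrative assumptions; the
// published system's parameters and logic may differ.
public class PeakDetector {
    private final double threshold;      // minimum acceleration (e.g. in g) to count as a peak
    private final int refractorySamples; // minimum distance between two peaks, in samples
    private int sinceLastPeak;
    private double previous = 0, current = 0;

    public PeakDetector(double threshold, int refractorySamples) {
        this.threshold = threshold;
        this.refractorySamples = refractorySamples;
        this.sinceLastPeak = refractorySamples;
    }

    /** Feed one sample; returns true when the previously fed sample is a peak. */
    public boolean onSample(double next) {
        sinceLastPeak++;
        boolean isPeak = current > threshold
                && current >= previous && current > next
                && sinceLastPeak >= refractorySamples;
        if (isPeak) sinceLastPeak = 0;
        previous = current;
        current = next;
        return isPeak;
    }
}
```

With, say, a 1000 Hz accelerometer, a refractory period of a few hundred samples avoids counting one foot strike as several peaks.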


~ ISMIR 2018 Conference - Automatic Analysis Of Global Music Recordings suggests Scale Tuning Universals

Thanks to the support of a travel grant from the Faculty of Arts and Philosophy of Ghent University I was able to attend the ISMIR 2018 conference, a conference on Music Information Retrieval. I am co-author of a contribution to the Late-Breaking / Demos session.

The structure of musical scales has been proposed to reflect universal acoustic principles based on simple integer ratios. However, some studying tuning in small samples of non-Western cultures have argued that such ratios are not universal but specific to Western music. To address this debate, we applied an algorithm that could automatically analyze and cross-culturally compare scale tunings to a global sample of 50 music recordings, including both instrumental and vocal pieces. Although we found great cross-cultural diversity in most scale degrees, these preliminary results also suggest a strong tendency to include the simplest possible integer ratio within the octave (perfect fifth, 3:2 ratio, ~700 cents) in both Western and non-Western cultures. This suggests that cultural diversity in musical scales is not without limit, but is constrained by universal psycho-acoustic principles that may shed light on the evolution of human music.
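For reference, the cents values in the abstract follow from the standard ratio-to-cents conversion, cents = 1200 · log2(ratio), which is easy to verify:

```java
// Convert a frequency ratio to cents: cents = 1200 * log2(ratio).
// A perfect fifth (3:2) lands at ~702 cents, hence the '~700 cents'
// in the abstract; the octave (2:1) is exactly 1200 cents.
public class Cents {
    static double ratioToCents(double ratio) {
        return 1200.0 * Math.log(ratio) / Math.log(2.0);
    }

    public static void main(String[] args) {
        System.out.printf("3:2 = %.1f cents%n", ratioToCents(3.0 / 2.0)); // 702.0
        System.out.printf("2:1 = %.1f cents%n", ratioToCents(2.0));       // 1200.0
    }
}
```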


~ JGaborator - Fast Gabor spectral transforms in Java

Recently I have published a small library on GitHub called JGaborator. The library calculates fine-grained constant-Q spectral representations of audio signals, quickly, from Java. The calculation of the Gabor transform itself is done by a C++ library named Gaborator, and a Java Native Interface (JNI) bridge to the C++ Gaborator is provided. The combination of the Gaborator and a fast FFT library (such as PFFFT) allows fine-grained constant-Q transforms at a rate of about 200 times real-time on moderate hardware. It can serve as a front-end for several audio processing or MIR applications.
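The bridge follows the standard JNI pattern: a Java class declares native methods and loads the compiled C++ library when the class is initialized. A simplified illustration of that pattern (the class and method names here are hypothetical, not JGaborator’s actual API):

```java
// Illustration of the JNI bridge pattern used to call a C++ library
// from Java. Names are hypothetical and do not reflect the actual
// JGaborator API.
public class GaborTransformBridge {
    static {
        // Loads e.g. libjgaborator.so (or a .dylib/.dll) from java.library.path.
        System.loadLibrary("jgaborator");
    }

    // Implemented in C++ and bound via JNI: analyzes a block of audio
    // samples and returns spectral coefficient magnitudes.
    public native float[] analyze(float[] audioBlock, int sampleRate);
}
```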

For more information on the Gaborator C++ library by Andreas Gustafsson, see the gaborator.com website or the author’s talk on the library, titled Exploring time-frequency space with the Gaborator.

While the Gaborator allows reversible transforms, only the forward transform (from the time domain to the spectral domain) is currently supported from Java. A spectral visualization tool is also part of the package. See below for a screenshot:

JGaborator screenshot


~ TISMIR journal article - A Case for Reproducibility in MIR: Replication of ‘A Highly Robust Audio Fingerprinting System’

As an extension of the ISMIR conferences the International Society for Music Information Retrieval started a new journal: TISMIR. The first issue contains an article of mine:
A Case for Reproducibility in MIR: Replication of ‘A Highly Robust Audio Fingerprinting System’. The abstract can be read here:

Claims made in many Music Information Retrieval (MIR) publications are hard to verify due to the fact that (i) often only a textual description is made available and code remains unpublished – leaving many implementation issues uncovered; (ii) copyrights on music limit the sharing of datasets; and (iii) incentives to put effort into reproducible research – publishing and documenting code and specifics on data – are lacking. In this article the problems around reproducibility are illustrated by replicating an MIR work. The system and evaluation described in ‘A Highly Robust Audio Fingerprinting System’ is replicated as closely as possible. The replication is done with several goals in mind: to describe difficulties in replicating the work and subsequently reflect on guidelines around reproducible research. Added contributions are the verification of the reported work, a publicly available implementation and an evaluation method that is reproducible.
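For context, the fingerprinting scheme replicated here derives a 32-bit sub-fingerprint per audio frame from the signs of energy differences over 33 frequency bands in consecutive frames. That bit-derivation step is compact enough to sketch in Java:

```java
// Sub-fingerprint extraction as described in 'A Highly Robust Audio
// Fingerprinting System': bit m of a frame is the sign of an energy
// difference across adjacent bands and consecutive frames.
public class SubFingerprint {
    // prev and curr each hold the energies of 33 frequency bands for
    // the previous and current frame; the result is a 32-bit fingerprint.
    static int bits(double[] prev, double[] curr) {
        int fingerprint = 0;
        for (int m = 0; m < 32; m++) {
            double diff = (curr[m] - curr[m + 1]) - (prev[m] - prev[m + 1]);
            if (diff > 0) fingerprint |= 1 << m;
        }
        return fingerprint;
    }
}
```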


~ JNMR article - Beyond documentation – The digital philology of interaction heritage

Marc Leman and I have recently published an article in the Journal of New Music Research for a special issue on Digital Philology for Multimedia Cultural Heritage. Our contribution is titled Beyond documentation – The digital philology of interaction heritage.

A philologist’s approach to heritage is traditionally based on the curation of documents, such as text, audio and video. However, with the advent of interactive multimedia, heritage becomes floating and volatile, and not easily captured in documents. We propose an approach to heritage that goes beyond documents. We consider the crucial role of institutes for interactive multimedia (as motor of a living culture of interaction) and propose that the digital philologist’s task will be to promote the collective/shared responsibility of (interactive) documenting, engage engineering in developing interactive approaches to heritage, and keep interaction-heritage alive through the education of citizens.


~ MIR Meetup Berlin - Acoustic Fingerprinting in Research

I was kindly invited by SoundCloud to give a presentation on “Acoustic fingerprinting in research”. The presentation took place during one of the “MIR Meetups” in Berlin on Monday, April 23, 2018. Before my presentation there was a talk by Derek and Josh (both SoundCloud engineers) detailing the state of SoundCloud’s internal fingerprinting system.

During my presentation I gave an overview of various applications of acoustic fingerprinting in a music research environment and detailed how these applications can be handled and are implemented in Panako, an open-source fingerprinting system.

Below the slides used during the presentation can be found:


~ Engineering systematic musicology

On the 11th of January I successfully completed my PhD training under the mentorship of Marc Leman with a public defense at De Krook in Ghent.

I also handed in my dissertation, titled Engineering systematic musicology: methods and services for computational and empirical music research (version of record). The dissertation bundles several of my publications: the introduction places them in a framework and the conclusion reflects upon them. The publications all contribute either directly to the field of systematic musicology (e.g. tone scale research) or indirectly, by facilitating specific research tasks (e.g. synchronization of multi-modal research data).

The presentation during my defense was meant for a broader audience. During the presentation I gave examples of the research topics I have been working on and focused on how these are connected. The presentation, titled Engineering systematic musicology, can be seen by following the previous link and is included below. The slide with the live spectrogram and the slide with the map need to be started by double-clicking, otherwise they remain empty.

The presentation is essentially an interactive HTML5 website built with the reveal.js framework. This has the advantage that multimedia is well supported and all kinds of interactions can be scripted. The presentation above, for example, uses the Web Audio API for live audio visualization and the Google Maps API for interactive maps. Video integration is also seamless. It would be a struggle to achieve similarly multimedia-heavy presentations with other presentation software packages such as Impress, Keynote or PowerPoint.


~ HTML5 spectrogram on canvas with pitch estimation

To present my research in an accessible way I needed a reliable way to visualize audio, audio feature extraction and the processing of audio features into higher-level representations. HTML5, the canvas element, JavaScript and the reveal.js presentation framework offered a solution.

I often need audio and video material embedded in presentations. I have had bad experiences with PowerPoint/Keynote and especially with the LaTeX beamer package when it comes to multimedia: audio/video material that does not start playing, or starts at the wrong moment; finicky codec support; limited compatibility; and a clunky UX (whoever came up with the idea of showing multimedia controls only while hovering over e.g. an audio thumbnail should be reoriented towards back-end programming). All of this contributes to errors while handling audio/video. Moreover, the interactive capabilities are limited.

The component above is an interactive spectrogram which combines the HTML5 Web Audio API with the canvas element and some JavaScript to glue things together. Note that this has been tested on Chrome and Firefox only.

To experiment with its capabilities you can either drag and drop mp3 files onto it or analyse live audio from your microphone.

This is based on the spectrogram implementation by GitHub user Boris Smus. The live pitch tracking was implemented by Peter Hayes and is in turn based on my own Java code.
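Since the pitch tracker ultimately traces back to Java code, the per-frame computation behind such a spectrogram can be sketched in Java as well: window an audio frame, take the magnitude spectrum, convert to decibels and map each bin to a pixel intensity. The sketch below uses a naive DFT to stay dependency-free; a real implementation would use an FFT, and the dB range used for the mapping is an assumption:

```java
// Sketch of the computation behind one spectrogram column: Hann window,
// magnitude spectrum (naive DFT for clarity), dB conversion, and a
// mapping of an assumed [-60 dB, 40 dB] range onto [0, 255] pixel values.
public class SpectrogramColumn {
    static int[] column(float[] frame) {
        int n = frame.length;
        double[] windowed = new double[n];
        for (int i = 0; i < n; i++) { // Hann window reduces spectral leakage
            windowed[i] = frame[i] * 0.5 * (1 - Math.cos(2 * Math.PI * i / (n - 1)));
        }
        int[] pixels = new int[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int i = 0; i < n; i++) {
                double phase = -2 * Math.PI * k * i / n;
                re += windowed[i] * Math.cos(phase);
                im += windowed[i] * Math.sin(phase);
            }
            double db = 20 * Math.log10(Math.hypot(re, im) + 1e-12);
            pixels[k] = (int) Math.max(0, Math.min(255, (db + 60) * 2.55));
        }
        return pixels; // one column of the spectrogram, low to high frequency
    }
}
```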