Publications

Partial lists of my publications can be found in the research information systems of HoGent and UGent. A list of my publications is also available on Google Scholar. A more complete list can be found below.

Journal Articles

Adopting a music-to-heart rate alignment strategy to measure the impact of music and music tempo on human heart rate
Edith Van Dyck, Joren Six, Esin Soyer, Marlies Denys, Ilka Bardijn, and Marc Leman
PDF – Author version | Version of Record | BibTeX

Acoustical properties in Inhaling Singing: a case-study
Françoise Vanhecke, Mieke Moerman, Frank Desmet, Joren Six, Kristin Daemers, Godfried-Willem Raes, Marc Leman
(2017) Physics in Medicine
Version of Record | BibTeX

Synchronizing Multimodal Recordings Using Audio-To-Audio Alignment
Joren Six and Marc Leman
(2015) Journal on Multimodal User Interfaces
PDF – Author version | Version of Record | BibTeX

Tarsos, a modular platform for precise pitch analysis of western and non-western music
Joren Six, Olmo Cornelis and Marc Leman
(2013) Journal of New Music Research. 42(2). p.113-129
PDF – Author version | Version of Record | BibTeX

Evaluation and Recommendation of Pulse and Tempo Annotation in Ethnic Music
Olmo Cornelis, Joren Six, Andre Holzapfel, and Marc Leman
(2013) Journal of New Music Research. 42(2). p.131-149
PDF – Author version | Version of Record | BibTeX

Papers and Abstracts in Peer-Reviewed Conference Proceedings

Regularity and asynchrony when tapping to tactile, auditory and combined pulses
Joren Six, Laura Arens, Hade Demoor, Thomas Kint and Marc Leman
(2017) Proceedings of the ESCOM conference
Author version | Version of Record | BibTeX

Multimodal analysis of synchronization data from patients with dementia
Frank Desmet, Micheline Lesaffre, Joren Six, Nathalie Ehrlé, Séverine Samson
(2017) Proceedings of the ESCOM conference
Author version | Version of Record | BibTeX

A framework to provide fine-grained time-dependent context for active listening experiences
Joren Six and Marc Leman
(2017) Proceedings of AES Conference on Semantic Audio 2017
Author version | Version of Record | BibTeX

Music and Movement Synchronization in People with Dementia
Matthieu Ghilain, Loris Schiaratura, Micheline Lesaffre, Joren Six, Frank Desmet, Séverine Samson
Conference website | PDF

The relaxing effect of tempo on music-aroused heart rate
Edith Van Dyck, Joren Six
(2016) Proceedings of the 14th International Conference for Music Perception and Cognition (ICMPC 14)
PDF | BibTeX

The Deep History of Music Project
Armand Leroi, Matthias Mauch, Pat Savage, Emmanouil Benetos, Juan Bello, Maria Panteli, Joren Six, Tillman Weyde
(2015) Proceedings of the 5th International Folk Music Analysis Workshop (FMA 2015)
PDF | BibTeX

Panako – A Scalable Acoustic Fingerprinting System Handling Time-Scale and Pitch Modification
Joren Six and Marc Leman
(2014) Proceedings of the 15th ISMIR Conference (ISMIR 2014)
Author version | Version of Record | BibTeX

TarsosDSP, a Real-Time Audio Processing Framework in Java
Joren Six, Olmo Cornelis and Marc Leman
(2014) Proceedings of the 53rd AES Conference (AES 53rd)
Author version | Version of Record | BibTeX

Computer Assisted Transcription of Ethnic Music
Joren Six and Olmo Cornelis
(2013) Proceedings of the 2013 Folk Music Analysis Conference (FMA 2013)
PDF | BibTeX

Revealing and Listening to Scales from the Past; Tone Scale Analysis of Archived Central-African Music Using Computational Means
Olmo Cornelis and Joren Six
(2012) Proceedings of the 2012 Conference for Interdisciplinary Musicology (CIM 2012)
PDF | BibTeX

Sound to scale to sound, a setup for microtonal exploration and composition
Olmo Cornelis and Joren Six
(2012) Proceedings of the International Computer Music Conference (ICMC 2012)
PDF | BibTeX

A Robust Audio Fingerprinter Based on Pitch Class Histograms: Applications for Ethnic Music Archives
Joren Six and Olmo Cornelis
(2012) Proceedings of the International Workshop of Folk Music Analysis (FMA 2012)
PDF | BibTeX

Towards the Tangible: Microtonal Scale Exploration in Central-African Music
Olmo Cornelis and Joren Six
(2012) Proceedings of the Analytical Approaches to World Music Conference (AAWM 2012)
PDF | BibTeX

Tarsos – a Platform to Explore Pitch Scales in Non-Western and Western Music
Joren Six and Olmo Cornelis
(2011) Proceedings of the 12th International Symposium on Music Information Retrieval (ISMIR 2011)
PDF | BibTeX

Peachnote Piano: Making MIDI instruments social and smart using Arduino, Android and Node.js
Joren Six, Vladimir Viro
(2011) Demo Sessions of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011)
PDF | BibTeX

Master’s Thesis

Collaborative Filtering: Onderzoek & implementatie (Collaborative Filtering: Research & Implementation)
Greet Dolvelde, Joren Six
(2008) Master’s Thesis
PDF | BibTeX

Presentations, Discussions and Guest Lectures, by Invitation

Panel discussion, 2012: Technological challenges for the computational modelling of the world’s musical heritage, Folk Music Analysis Conference 2012 – FMA 2012, organizers: Polina Proutskova and Emilia Gómez, Seville, Spain

Guest lecture, 2012: Non-western music and digital humanities, for: “Studies in Western Music History: Quantitative and Computational Approaches to Music History”, M.I.T., Boston, U.S.

Guest lecture, 2011: Presenting Tarsos, a software platform for pitch analysis. At: Electrical and Electronics Engineering Dept., IYTE, Izmir, Turkey

Workshop, 2017: Computational Ethnomusicology – Methodologies for a new field, Leiden, The Netherlands

Experience as Lecturer

A002301 (2016-2017) “Grondslagen van de muzikale acoustica en sonologie” (Foundations of Musical Acoustics and Sonology) – Theory and practice sessions together with dr. Pieter-Jan Maes

Other Output

See the software page



~ ESCOM 2017 - Regularity and asynchrony when tapping to tactile, auditory and combined pulses

The 25th anniversary edition of the ESCOM Conference was organised in August 2017 by the IPEM research group at Ghent University. ESCOM is the conference of the European Society for the Cognitive Sciences of Music. We had two contributions to the conference.

The first was a collaboration with Frank Desmet, Micheline Lesaffre, Nathalie Ehrlé and Séverine Samson. The contribution is titled Multimodal Analysis of Synchronization Data from Patients with Dementia. It details a framework to analyze data from an experiment with patients with dementia.

For the second contribution I was the main researcher. It is the result of a project with students of the systematic musicology course at Ghent University (Laura Arens, Hade Demoor, Thomas Kint). The contribution is called Regularity and asynchrony when tapping to tactile, auditory and combined pulses.

The presentation details a multisensory tapping task with the aim of developing an assistive technology for dancers.

  • ESCOM 2017 presentation


~ Real-time signal synchronization with acoustic fingerprinting - A Master's Thesis By Ward Van Assche

During the last semester Ward wrote a Master's thesis titled Real-time signal synchronization with acoustic fingerprinting. Marleen Denert and I both served as promoters for his thesis.

The aim of the thesis was to design and develop a system to automatically synchronize streams of incoming sensor data in real-time. Ward followed up on an idea that was described in an article called Synchronizing Multimodal Recordings Using Audio-To-Audio Alignment. The extended abstract can be consulted. The remainder of the thesis is in Dutch.

For the thesis Ward developed a Max/MSP object to read data from sensors together with audio. Also provided by Ward is an object to synchronize audio and data in real-time. The objects are depicted above.


~ Connecting Musical Modules - Musical Hardware and Software Interfaces

I have given a presentation at the Newline conference, a yearly event organized by Hackerspace Ghent. It was about:

“In this talk I will give a practical overview on how to connect hard- and software components for musical applications. Next to an overview there will be demos! Do you want to make a musical instrument using a light sensor? Use your smartphone as an input device for a synth? Or are you simply interested in simple low-latency communication between devices? Come to this talk! More concretely the talk will feature the Axoloti audio board, Teensy micro-controller with audio board, MIDI and OSC protocols, Android MIDI features and some sensors.”

During the presentation the hardware and software components were demonstrated. More concretely, an introduction was given to the Axoloti audio board, the Teensy microcontroller with audio board, the MIDI and OSC protocols, Android's MIDI features and some sensors.
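
As an aside, and not something shown in the talk itself, the sketch below illustrates how little Java code is needed to talk MIDI from a desktop machine. It uses the standard javax.sound.midi API to send a note-on and note-off message to the default MIDI receiver (often the built-in software synthesizer); the note number and velocity are arbitrary example values.

import javax.sound.midi.MidiSystem;
import javax.sound.midi.Receiver;
import javax.sound.midi.ShortMessage;

// Minimal sketch: send one MIDI note via the standard javax.sound.midi API.
public class MidiNoteExample {
    public static void main(String[] args) throws Exception {
        // Obtain a receiver for the default MIDI device (often the built-in software synth).
        Receiver receiver = MidiSystem.getReceiver();

        // Note-on message: channel 0, middle C (60), velocity 93.
        ShortMessage noteOn = new ShortMessage(ShortMessage.NOTE_ON, 0, 60, 93);
        receiver.send(noteOn, -1); // -1 means "send immediately"

        Thread.sleep(500); // let the note sound for half a second

        // Matching note-off message to stop the note.
        ShortMessage noteOff = new ShortMessage(ShortMessage.NOTE_OFF, 0, 60, 0);
        receiver.send(noteOff, -1);

        receiver.close();
    }
}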

The presentation about DIY musical modules can be downloaded here.


~ Lecture on MIR - Tone Scale Extraction - Acoustic Fingerprinting

This morning, the 30th of October 2015, I gave a lecture on Music Information Retrieval in general and two MIR tasks in particular. The two tasks covered in more detail were tone scale analysis and acoustic fingerprinting.


During the lecture some live demonstrations were done with Panako and Tarsos, and some examples from TarsosDSP were used as well. Excerpts of the music used are available here, which is especially interesting if you want to repeat the demos. Sonic Visualiser, Music21 and MuseScore were also mentioned during the lecture.

The presentation about Music Information Retrieval and the handouts can be found here as well.


~ Synchronizing Multimodal Recordings Using Audio-To-Audio Alignment - In Journal on Multimodal User Interfaces

The article titled “Synchronizing Multimodal Recordings Using Audio-To-Audio Alignment” by Joren Six and Marc Leman has been accepted for publication in the Journal on Multimodal User Interfaces. The article will be published later this year. It describes and tests a method to synchronize data-streams. Below you can find the abstract, pointers to the software under discussion and an author version of the article itself.

Synchronizing Multimodal Recordings Using Audio-To-Audio Alignment
An Application of Acoustic Fingerprinting to Facilitate Music Interaction Research

Abstract: Research on the interaction between movement and music often involves analysis of multi-track audio, video streams and sensor data. To facilitate such research a framework is presented here that allows synchronization of multimodal data. A low cost approach is proposed to synchronize streams by embedding ambient audio into each data-stream. This effectively reduces the synchronization problem to audio-to-audio alignment. As a part of the framework a robust, computationally efficient audio-to-audio alignment algorithm is presented for reliable synchronization of embedded audio streams of varying quality. The algorithm uses audio fingerprinting techniques to measure offsets. It also identifies drift and dropped samples, which makes it possible to find a synchronization solution under such circumstances as well. The framework is evaluated with synthetic signals and a case study, showing millisecond accurate synchronization.
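
The offset measurement mentioned in the abstract can be illustrated with a small sketch. It is not the SyncSink or Panako implementation, merely the underlying idea: fingerprint hashes that two streams have in common are collected, and the most frequent difference between their timestamps is taken as the offset between the streams. The map layout and the 32 ms bin size are assumptions made for illustration only.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch, not the actual SyncSink/Panako code: given fingerprint hashes with
// timestamps (in seconds) from a reference stream and another stream, estimate the offset
// between the streams as the most common time difference among matching hashes.
public class OffsetEstimator {

    public static double estimateOffset(Map<Long, List<Double>> referencePrints,
                                        Map<Long, List<Double>> otherPrints) {
        // Histogram of time differences, rounded to a 32 ms bin (an assumed bin size).
        Map<Long, Integer> histogram = new HashMap<>();
        double binSize = 0.032;

        for (Map.Entry<Long, List<Double>> entry : otherPrints.entrySet()) {
            List<Double> referenceTimes = referencePrints.get(entry.getKey());
            if (referenceTimes == null) continue; // hash not present in the reference stream
            for (double tOther : entry.getValue()) {
                for (double tRef : referenceTimes) {
                    long bin = Math.round((tRef - tOther) / binSize);
                    histogram.merge(bin, 1, Integer::sum);
                }
            }
        }

        // The bin with the most matches corresponds to the actual offset.
        long bestBin = histogram.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(0L);
        return bestBin * binSize;
    }
}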

To read the article, consult the author version of Synchronizing Multimodal Recordings Using Audio-To-Audio Alignment. The data-set used in the case study is available here. It contains recordings of balance-board data, accelerometers, and two webcams that need to be synchronized. The final publication is available at Springer via DOI 10.1007/s12193-015-0196-1.

The algorithm under discussion is included in Panako, an audio fingerprinting system, but is also available for download here. The SyncSink application has been packaged separately for ease of use.

To use the application, start it by double-clicking the downloaded SyncSink JAR-file. Subsequently, add various audio or video files using drag and drop. If the same audio is found in the various media files, a time-box plot appears, as in the screenshot below. To add corresponding data-files, click one of the boxes on the timeline and choose a data file that is synchronized with the audio. The data-file should be a CSV-file: the separator should be ‘,’ and the first column should contain a time-stamp in fractional seconds. After pressing Sync, a new CSV-file is created with the first column containing correctly shifted time stamps. If this is done for multiple files, a synchronized sensor-stream is created. Also, ffmpeg commands to synchronize the media files themselves are printed to the command line.
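
To make the CSV convention concrete, the sketch below shows what the shifting step boils down to for a single data file: read each line, add the offset found by the audio-to-audio alignment to the timestamp in the first column, and write the result to a new file. The file names and the offset value are hypothetical.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.PrintWriter;

// Sketch of shifting the time stamps in a sensor CSV-file by a known offset (in seconds).
// Assumes every line starts with a fractional-second timestamp followed by sensor values.
public class ShiftCsvTimestamps {
    public static void main(String[] args) throws Exception {
        double offset = 2.847; // hypothetical offset reported by the alignment, in seconds

        try (BufferedReader reader = new BufferedReader(new FileReader("accelerometer.csv"));
             PrintWriter writer = new PrintWriter(new FileWriter("accelerometer_synced.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                int comma = line.indexOf(',');
                double timestamp = Double.parseDouble(line.substring(0, comma));
                // Write the shifted timestamp followed by the untouched sensor values.
                writer.println((timestamp + offset) + line.substring(comma));
            }
        }
    }
}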

This work was supported by a Methusalem grant from the Flemish Government, Belgium. Special thanks go to Ivan Schepers for building the balance boards used in the case study. If you want to cite the article, use the following BibTeX:

@article{six2015multimodal,
  author      = {Joren Six and Marc Leman},
  title       = {{Synchronizing Multimodal Recordings Using Audio-To-Audio Alignment}},
  issn        = {1783-7677},
  volume      = {9},
  number      = {3},
  pages       = {223-229},
  doi         = {10.1007/s12193-015-0196-1},
  journal     = {{Journal on Multimodal User Interfaces}},
  publisher   = {Springer Berlin Heidelberg},
  year        = 2015
}
  • The synchronized data from the two webcams, accelerometer and balance board in ELAN. From top to bottom the synchronized streams are two video streams, balance-board data (red), accelerometer data (green) and audio (black).

  • Multimodal recording system diagram. Each webcam has a microphone and is connected to the PC via USB. The dashed arrows represent analog signals. The balance board has four analog sensors but these are simplified to one connection in the schematic. The analog output of the microphones is also recorded through the DAQ. An analog accelerometer is connected to a microcontroller which also records audio.

  • Two streams of audio with fingerprints marked. Some fingerprints are present in both streams (green, O) while others are not (red, x). Matching fingerprints have the same offset, indicated by the dotted lines.

  • Synchronized streams in Sonic Visualiser. Here you can see two-channel audio synchronized with accelerometer data (top, green) and balance-board data (bottom, purple).

  • Conceptual drawing used as a basis for the SyncSink application. A reference stream (blue) can be synchronized with streams one and two. It allows a workflow where streams are started and stopped (red) or start before the reference stream (green).

  • A microcontroller fitted with an electret microphone and a microSD card slot. It can record audio in real-time together with sensor data.

  • SyncSink: synchronize media files. A user-friendly interface to synchronize media and data files. First a reference media file is added using drag-and-drop. The audio stream of the reference is extracted and plotted on a timeline as the topmost box. Subsequently other media files are added. The offsets with respect to the reference are calculated and plotted. CSV-files with timestamps and data recorded in sync with a stream can be attached to a respective audio stream. Finally, after pressing Sync!, the data and media files are modified to be exactly in sync with the reference.


~ Audio Fingerprinting - Opportunities for digital musicology

On the 27th of November 2014, a lecture on audio fingerprinting and its applications for digital musicology will be given at IPEM. The lecture introduces audio fingerprinting, explains an audio fingerprinting technique and then goes on to explain how such an algorithm offers opportunities for large scale digital musicological applications. Here you can download the slides about audio fingerprinting and its opportunities for digital musicology.

With the explained audio fingerprinting technique a specific form of very reliable musical structure analysis can be done. Below, in the figure section, an example of repetitive structure in the song Ribs Out is shown. Another example is comparing edits or versions of songs. Below, also in the figure section, the radio edit of Daft Punk’s Get Lucky is compared with the original version. Audio synchronization using fingerprinting is another application that is actively used in the field of digital musicology to align audio with extracted features.

Since acoustic fingerprinting makes structure analysis very efficient, it can be applied on a large scale (20k songs). The figure below shows that identical repetition is something that has been used more and more since the mid-1970s. The trend probably aligns with the amount of technical knowledge needed to ‘copy and paste’ a snippet of music.

Fig: How much identical repetition is used in music over the years.

The Panako audio fingerprinting system was used to generate data for these case studies. The lecture and this post are partly inspired by a blog post by Paul Brossier.

  • Spectral peak acoustic fingerprinting system

  • How much identical repetition is used in a set of 20k songs.

  • Radio edit vs. original of Daft Punk's Get Lucky

  • Structure in Ribs Out


~ ISMIR 2014 - Panako - A Scalable Acoustic Fingerprinting System Handling Time-Scale and Pitch Modification

At ISMIR 2014 I will present a paper on a fingerprinting system. ISMIR, the annual conference of the International Society for Music Information Retrieval, is the world’s leading interdisciplinary forum on accessing, analyzing, and organizing digital music of all sorts. This year’s instalment takes place in Taipei, Taiwan. My contribution is a paper titled Panako – A Scalable Acoustic Fingerprinting System Handling Time-Scale and Pitch Modification; it will be presented during a poster session on the 27th of October.

This paper presents a scalable granular acoustic fingerprinting system. An acoustic fingerprinting system uses condensed representation of audio signals, acoustic fingerprints, to identify short audio fragments in large audio databases. A robust fingerprinting system generates similar fingerprints for perceptually similar audio signals. The system presented here is designed to handle time-scale and pitch modifications. The open source implementation of the system is called Panako and is evaluated on commodity hardware using a freely available reference database with fingerprints of over 30,000 songs. The results show that the system responds quickly and reliably on queries, while handling time-scale and pitch modifications of up to ten percent.

The system is also shown to handle GSM-compression, several audio effects and band-pass filtering. After a query, the system returns the start time in the reference audio and how much the query has been pitch-shifted or time-stretched with respect to the reference audio. The design of the system that offers this combination of features is the main contribution of this paper.
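
For readers unfamiliar with acoustic fingerprints, the sketch below illustrates the general landmark idea of pairing prominent spectral peaks and packing them into a compact hash. It is a deliberately simplified illustration, not Panako's actual fingerprint construction, which is designed differently so that it can also survive time-scale and pitch modifications. The bit layout and peak values are made up for the example.

// Simplified illustration of the peak-pairing idea behind landmark-based fingerprinting.
// Panako's actual fingerprints are constructed differently; this only shows the concept.
public class PeakPairFingerprint {

    static final class Peak {
        final int timeFrame;      // index of the spectrogram frame
        final int frequencyBin;   // index of the frequency bin
        Peak(int timeFrame, int frequencyBin) { this.timeFrame = timeFrame; this.frequencyBin = frequencyBin; }
    }

    // Combine two spectral peaks into a single integer hash: the frequency of the first peak,
    // the frequency of the second peak and the time difference between them.
    static int hash(Peak p1, Peak p2) {
        int deltaT = p2.timeFrame - p1.timeFrame;   // assumed to fit in 8 bits
        return (p1.frequencyBin & 0xFFF) << 20
             | (p2.frequencyBin & 0xFFF) << 8
             | (deltaT & 0xFF);
    }

    public static void main(String[] args) {
        Peak anchor = new Peak(120, 310);
        Peak target = new Peak(135, 475);
        System.out.printf("fingerprint hash: %d (stored together with the anchor time %d)%n",
                hash(anchor, target), anchor.timeFrame);
    }
}

In a database, such hashes are stored together with the time of the anchor peak and an identifier of the song, so that a matching hash in a query immediately yields a candidate song and position.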

The system is available, together with documentation and information on how to reproduce the results from the ISMIR paper, on the Panako website. Also available for download are the Panako poster and the Panako ISMIR paper.

  • Results after time stretching

  • Results after time scale modification

  • Results after pitch shifting

  • Fingerprint and modifications

  • General fingerprinter


~ TarsosDSP Paper and Presentation at AES 53rd International conference on Semantic Audio

TarsosDSP will be presented at the AES 53rd International Conference on Semantic Audio in London. During the conference, both a presentation and a demonstration will be given of the paper TarsosDSP, a Real-Time Audio Processing Framework in Java, by Joren Six, Olmo Cornelis and Marc Leman, in Proceedings of the 53rd AES Conference (AES 53rd), 2014. From their website:

Semantic Audio is concerned with content-based management of digital audio recordings. The rapid evolution of digital audio technologies, e.g. audio data compression and streaming, the availability of large audio libraries online and offline, and recent developments in content-based audio retrieval have significantly changed the way digital audio is created, processed, and consumed. New audio content can be produced at lower cost, while also large audio archives at libraries or record labels are opening to the public. Thus the sheer amount of available audio data grows more and more each day. Semantic analysis of audio resulting in high-level metadata descriptors such as musical chords and tempo, or the identification of speakers facilitate content-based management of audio recordings. Aside from audio retrieval and recommendation technologies, the semantics of audio signals are also becoming increasingly important, for instance, in object-based audio coding, as well as intelligent audio editing, and processing. Recent product releases already demonstrate this to a great extent, however, more innovative functionalities relying on semantic audio analysis and management are imminent. These functionalities may utilise, for instance, (informed) audio source separation, speaker segmentation and identification, structural music segmentation, or social and Semantic Web technologies, including ontologies and linked open data.

This conference will give a broad overview of the state of the art and address many of the new scientific disciplines involved in this still-emerging field. Our purpose is to continue fostering this line of interdisciplinary research. This is reflected by the wide variety of invited speakers presenting at the conference.

The paper presents TarsosDSP, a framework for real-time audio analysis and processing. Most libraries and frameworks offer either audio analysis and feature extraction or audio synthesis and processing. TarsosDSP is one of only a few frameworks that offer analysis, processing and feature extraction in real-time, a unique feature in the Java ecosystem. The framework contains practical audio processing algorithms, it can be extended easily, and has no external dependencies. Each algorithm is implemented as simply as possible thanks to a straightforward processing pipeline. TarsosDSP’s features include a resampling algorithm, onset detectors, a number of pitch estimation algorithms, a time stretching algorithm, a pitch shifting algorithm, and an algorithm to calculate the Constant-Q transform. The framework also allows simple audio synthesis, some audio effects, and several filters. The Open Source framework is a valuable contribution to the MIR community and an ideal fit for interactive MIR applications on Android. The full paper, TarsosDSP, a Real-Time Audio Processing Framework in Java, can be downloaded.
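
The straightforward processing pipeline can be illustrated with the canonical TarsosDSP pitch-detection example below: audio from the microphone is dispatched in blocks and a YIN pitch estimate is printed for each block. This is a minimal sketch; package and class names follow the TarsosDSP sources but may differ between versions, and the TarsosDSP jar is assumed to be on the classpath.

import be.tarsos.dsp.AudioDispatcher;
import be.tarsos.dsp.AudioEvent;
import be.tarsos.dsp.io.jvm.AudioDispatcherFactory;
import be.tarsos.dsp.pitch.PitchDetectionHandler;
import be.tarsos.dsp.pitch.PitchDetectionResult;
import be.tarsos.dsp.pitch.PitchProcessor;
import be.tarsos.dsp.pitch.PitchProcessor.PitchEstimationAlgorithm;

// Minimal sketch of the TarsosDSP processing pipeline: read audio from the default
// microphone in blocks of 1024 samples and print the pitch estimated by YIN.
public class PitchDetectionExample {
    public static void main(String[] args) throws Exception {
        int sampleRate = 22050;
        int bufferSize = 1024;

        AudioDispatcher dispatcher =
                AudioDispatcherFactory.fromDefaultMicrophone(sampleRate, bufferSize, 0);

        dispatcher.addAudioProcessor(new PitchProcessor(
                PitchEstimationAlgorithm.YIN, sampleRate, bufferSize,
                new PitchDetectionHandler() {
                    @Override
                    public void handlePitch(PitchDetectionResult result, AudioEvent event) {
                        if (result.getPitch() != -1) {
                            System.out.printf("%.2f s: %.1f Hz%n",
                                    event.getTimeStamp(), result.getPitch());
                        }
                    }
                }));

        new Thread(dispatcher, "Audio dispatching").start();
    }
}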

A BibTeX entry for the paper can be found below.

@inproceedings{six2014tarsosdsp,
  author      = {Joren Six and Olmo Cornelis and Marc Leman},
  title       = {{TarsosDSP, a Real-Time Audio Processing Framework in Java}},
  booktitle   = {{Proceedings of the 53rd AES Conference (AES 53rd)}}, 
  year        =  2014
}
  • Sampling

  • AES53

  • Constant-Q

  • Flanger

  • Pitch Shifting


~ Evaluation and Recommendation of Pulse and Tempo Annotation in Ethnic Music - In Journal Of New Music Research

The journal paper Evaluation and Recommendation of Pulse and Tempo Annotation in Ethnic Music by Cornelis, Six, Holzapfel and Leman was published in a special issue about Computational Ethnomusicology of the Journal of New Music Research on the 20th of August 2013. Below you can find the abstract for the article, and the full text author version of the article itself.

Abstract: Large digital archives of ethnic music require automatic tools to provide musical content descriptions. While various automatic approaches are available, they are to a wide extent developed for Western popular music. This paper aims to analyze how automated tempo estimation approaches perform in the context of Central-African music. To this end we collect human beat annotations for a set of musical fragments, and compare them with automatic beat tracking sequences. We first analyze the tempo estimations derived from annotations and beat tracking results. Then we examine an approach, based on mutual agreement between automatic and human annotations, to automate such analysis, which can serve to detect musical fragments with high tempo ambiguity.
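
As a minimal illustration of what a tempo estimate derived from annotations amounts to, the sketch below converts a list of human beat annotations into a tempo in beats per minute via the median inter-beat interval. It is not the analysis code used in the paper, and the beat times are made-up example values.

import java.util.Arrays;

// Illustrative sketch (not the paper's analysis code): derive a tempo estimate in BPM
// from beat annotations by taking the median inter-beat interval.
public class TempoFromAnnotations {

    static double tempoFromBeats(double[] beatTimesInSeconds) {
        double[] intervals = new double[beatTimesInSeconds.length - 1];
        for (int i = 1; i < beatTimesInSeconds.length; i++) {
            intervals[i - 1] = beatTimesInSeconds[i] - beatTimesInSeconds[i - 1];
        }
        Arrays.sort(intervals);
        double medianInterval = intervals[intervals.length / 2];
        return 60.0 / medianInterval; // seconds per beat -> beats per minute
    }

    public static void main(String[] args) {
        // Hypothetical human beat annotations, in seconds.
        double[] beats = {0.52, 1.01, 1.53, 2.02, 2.55, 3.04};
        System.out.printf("Estimated tempo: %.1f BPM%n", tempoFromBeats(beats));
    }
}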

To read the full text you can either download the author version of Evaluation and Recommendation of Pulse and Tempo Annotation in Ethnic Music, or obtain the published version.

Below the BibTeX entry for the article is embedded.

@article{cornelis2013tempo_jnmr,
  author = {Olmo Cornelis and Joren Six and Andre Holzapfel and Marc Leman},
  title = {{Evaluation and Recommendation of Pulse and Tempo Annotation in Ethnic Music}},
  journal = {{Journal of New Music Research}},
  volume = {42},
  number = {2},
  pages = {131-149},
  year = {2013},
  doi = {10.1080/09298215.2013.812123}
}

~ Tarsos, a Modular Platform for Precise Pitch Analysis of Western and Non-Western Music - In Journal Of New Music Research

The journal paper Tarsos, a Modular Platform for Precise Pitch Analysis of Western and Non-Western Music by Six, Cornelis, and Leman was published in a special issue about Computational Ethnomusicology of the Journal of New Music Research on the 20th of August 2013. Below you can find the abstract for the article, and pointers to audio examples, the Tarsos software, and the author version of the article itself.

Abstract: This paper presents Tarsos, a modular software platform used to extract and analyze pitch organization in music. With Tarsos pitch estimations are generated from an audio signal and those estimations are processed in order to form musicologically meaningful representations. Tarsos aims to offer a flexible system for pitch analysis through the combination of an interactive user interface, several pitch estimation algorithms, filtering options, immediate auditory feedback and data output modalities for every step. To study the most frequently used pitches, a fine-grained histogram that allows up to 1200 values per octave is constructed. This allows Tarsos to analyze deviations in Western music, or to analyze specific tone scales that differ from the 12 tone equal temperament, common in many non-Western musics. Tarsos has a graphical user interface or can be launched using an API – as a batch script. Therefore, it is fit for both the analysis of individual songs and the analysis of large music corpora. The interface allows several visual representations, and can indicate the scale of the piece under analysis. The extracted scale can be used immediately to tune a MIDI keyboard that can be played in the discovered scale. These features make Tarsos an interesting tool that can be used for musicological analysis, teaching and even artistic productions.
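
The fine-grained histogram mentioned in the abstract can be illustrated with a short sketch: pitch estimates in hertz are converted to cents and folded into a single octave of 1200 bins, one bin per cent. This is a simplified illustration, not the actual Tarsos implementation, and the pitch estimates below are made-up values.

// Simplified sketch of a fine-grained pitch class histogram: pitch estimates in hertz
// are converted to cents and folded into one octave with 1200 bins (one bin per cent).
// Not the actual Tarsos implementation.
public class PitchClassHistogram {

    static final double REFERENCE_FREQUENCY = 8.176; // roughly MIDI note 0, an arbitrary reference

    static double hertzToAbsoluteCents(double hertz) {
        return 1200.0 * Math.log(hertz / REFERENCE_FREQUENCY) / Math.log(2);
    }

    public static void main(String[] args) {
        double[] pitchEstimates = {220.0, 331.2, 440.1, 445.0, 660.7}; // hypothetical estimates in Hz

        int[] histogram = new int[1200]; // 1200 bins, folded to a single octave
        for (double hertz : pitchEstimates) {
            int bin = (int) Math.round(hertzToAbsoluteCents(hertz)) % 1200;
            histogram[bin]++;
        }

        // Print the non-empty bins.
        for (int bin = 0; bin < histogram.length; bin++) {
            if (histogram[bin] > 0) {
                System.out.printf("bin %4d cents: %d estimate(s)%n", bin, histogram[bin]);
            }
        }
    }
}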

To read the full text you can either download the author version of Tarsos, a Modular Platform for Precise Pitch Analysis of Western and Non-Western Music, or obtain the published version.

Ladrang Kandamanyura (slendro pathet manyura) is the name of the piece used in the article throughout section 2. The album on which the piece can be found is available at Wergo. Below a thirty-second fragment is embedded. You can also download the thirty-second fragment to analyse it yourself.

Below the BibTeX entry for the article is embedded.

@article{six2013tarsos_jnmr,
  author = {Six, Joren and Cornelis, Olmo and Leman, Marc},
  title = {Tarsos, a Modular Platform for Precise Pitch Analysis 
            of Western and Non-Western Music},
  journal = {Journal of New Music Research},
  volume = {42},
  number = {2},
  pages = {113-129},
  year = {2013},
  doi = {10.1080/09298215.2013.797999},
 URL = {http://www.tandfonline.com/doi/abs/10.1080/09298215.2013.797999}
}

~ FMA 2013 - Computer Assisted Transcription of Ethnic Music

At the third international workshop on Folk Music Analysis we presented a poster titled Computer Assisted Transcription of Ethnic Music. The workshop took place in Amsterdam, The Netherlands, June 6 and 7, 2013.

In the extended abstract, also titled Computer Assisted Transcription of Ethnic Music, it is described how the Tarsos software program now has features aiding transcription. Tarsos is especially practical for ethnic music of which the tone scale is not known beforehand. The proceedings of FMA 2013 are available as well.

Fig: Computer Assisted Transcription of Ethnic Music poster

During the conference there also was an interesting panel on transcription. The following people participated: John Ashley Burgoyne, moderator (University of Amsterdam), Kofi Agawu (Princeton University), Dániel P. Biró (University of Victoria), Olmo Cornelis (University College Ghent, Belgium), Emilia Gómez (Universitat Pompeu Fabra, Barcelona), and Barbara Titus (Utrecht University). Some pictures can be found below.

