RAPIDMIX project

RAPIDMIX stands for Realtime Adaptive Prototyping for Industrial Design of Multimodal Interactive Expressive Technology. It was an H2020 Innovation Action funded by the European Commission that ran from 2015 to 2018, involving three main academic research partners from the interactive music technology field: IRCAM's ISMM team (Paris, France), Universitat Pompeu Fabra's MTG team (Barcelona, Spain), and Goldsmiths' EAVI group (London, England), as well as many industrial partners from the same technological field, including ROLI, the well-known creators of the Seaboard, and the Reactable company.

RAPIDMIX was the opportunity for me to become a "real" developer by joining the ISMM team led by Frédéric Bevilacqua, and to benefit from the whole team's great development experience, particularly in C++ and JavaScript. It also allowed me to collaborate with a number of "stars" in the field whose work I had admired for a while, notably people with both artistic and technological profiles like Sergi Jordà and Atau Tanaka, among others.

The main goal of the project was quite ambitious: create a unified API for a set of software libraries developed over the years by the three academic partners, in order to boost the R&D process of the industrial partners. Eventually, we came up with such an API, which is still living here.

My first mission was to develop proofs of concept and prototypes based on the academic partners' libraries (including API prototypes for subsets of these libraries). Together with the other academic R&D teams, we went through several cycles of research and development, regularly testing our work by organizing and participating in events such as workshops and hackathons, and in the end I also took part in the design of the final API.

Although the main quest was to finalize this official API, it doesn't integrate all the original candidate libraries, and many outcomes of the development process I was involved in still have a life of their own and are interesting enough to be presented hereafter.


MuBu is a huge Max/MSP package that serves different but related purposes such as audio analysis, audio synthesis, gesture recognition, and data visualization. It was initiated by Norbert Schnell and is maintained by Riccardo Borghesi from the ISMM team. The eponymous underlying C library mostly targets the management and visualization of heterogeneous multidimensional signals of all kinds (sounds, gestures, audio descriptors, etc.) and provides several sample-based synthesis techniques through a few core objects, including concatenative granular synthesis based on Diemo Schwarz's research. The other objects of the package, which perform audio analysis and gesture recognition, rely on other libraries, some of which are cited below.

I created all my first RAPIDMIX prototypes using MuBu for Max, and along the way I ended up contributing to the package by improving its documentation, writing a significant number of tutorials, and adding a few utilities to its objects list. All these contributions are now part of the official distribution.


The C++ XMM library, developed by Jules Françoise during his PhD thesis in the ISMM team, is certainly the one I worked with the most, and the one that helped me gain solid experience in bridging C++ code with all kinds of other languages. It is a machine learning library targeted at gesture recognition that features four statistical models: Gaussian mixture models, hierarchical hidden Markov models (whose states are Gaussian mixture models), and their respective equivalents with regression abilities. Jules had also ported his library to Max/MSP (included in the MuBu package) and Python. It is the natural and awesome successor of Frédéric Bevilacqua's early work in the field, the Gesture Follower, and of Baptiste Caramiaux's Gesture Variation Follower (GVF) library, also developed during an earlier PhD thesis in the ISMM team.
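To give an intuition of what such likelihood-based gesture recognition does, here is a deliberately simplified sketch (this is my own toy illustration, not the XMM API): each class is modelled by a single diagonal Gaussian fit to example frames, and recognition picks the class whose model gives the highest log-likelihood for an incoming frame. XMM's GMMs generalize this to mixtures of Gaussians, and its HHMMs add temporal structure on top.

```javascript
// Toy likelihood-based classifier, in the spirit of XMM's GMM
// recognizer (NOT the actual XMM API): one diagonal Gaussian per class.

function fitGaussian(frames) {
  const dim = frames[0].length;
  const mean = new Array(dim).fill(0);
  const vari = new Array(dim).fill(0);
  for (const f of frames) f.forEach((x, i) => { mean[i] += x / frames.length; });
  for (const f of frames) f.forEach((x, i) => { vari[i] += (x - mean[i]) ** 2 / frames.length; });
  return { mean, vari: vari.map(v => Math.max(v, 1e-6)) }; // floor the variance
}

function logLikelihood(model, frame) {
  // log of a diagonal multivariate Gaussian density
  return frame.reduce((acc, x, i) => {
    const d = x - model.mean[i];
    return acc - 0.5 * (Math.log(2 * Math.PI * model.vari[i]) + (d * d) / model.vari[i]);
  }, 0);
}

function classify(models, frame) {
  let best = null, bestLL = -Infinity;
  for (const [label, model] of Object.entries(models)) {
    const ll = logLikelihood(model, frame);
    if (ll > bestLL) { bestLL = ll; best = label; }
  }
  return best;
}

// Toy 2-D "accelerometer" data for two gesture classes.
const models = {
  still: fitGaussian([[0.0, 0.1], [0.1, 0.0], [-0.1, 0.1]]),
  shake: fitGaussian([[2.1, 1.9], [1.8, 2.2], [2.0, 2.0]]),
};

console.log(classify(models, [0.05, 0.05])); // → "still"
console.log(classify(models, [1.9, 2.1]));   // → "shake"
```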

Early in the project I started working with JavaScript and Node.js technologies, which I was able to learn on the fly thanks to my mentor Benjamin Matuszewski. It turns out that the former head of the ISMM team, Norbert Schnell, had recently made the shift to web technologies (he was the initiator of the first Web Audio Conference), and had developed the Node.js soundworks framework, dedicated to creating collective musical experiences on mobile devices. So I started digging into how to bring XMM into the JavaScript world, and came up with two libraries: xmm-node, a C++ addon for Node.js, and xmm-client, a rewrite of the decoding part of the library (without the training of models from examples) in pure JavaScript.
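This train-on-the-server / decode-in-the-browser split works because a trained model is just data: it can be serialized to JSON by the native addon and shipped to clients that only need the (much lighter) decoding math. Here is a hypothetical sketch of that round trip; the property names and distance-based decoding are invented for illustration and do not reflect xmm's actual JSON schema:

```javascript
// Hypothetical model exchange between a Node training process and a
// browser decoder. Property names are made up for illustration; the
// real xmm libraries define their own model format.

// Server side: pretend this came out of a native training addon.
const trainedModel = {
  type: 'gmm',
  classes: {
    swipe: { mean: [1.0, 0.2], vari: [0.1, 0.1] },
    tap:   { mean: [0.1, 1.1], vari: [0.1, 0.1] },
  },
};
const wire = JSON.stringify(trainedModel); // what travels over HTTP/WebSocket

// Client side: pure JavaScript, no native code needed for decoding.
const model = JSON.parse(wire);
function decode(frame) {
  let best = null, bestScore = Infinity;
  for (const [label, c] of Object.entries(model.classes)) {
    // squared Mahalanobis-like distance with diagonal covariance
    const score = frame.reduce(
      (acc, x, i) => acc + ((x - c.mean[i]) ** 2) / c.vari[i], 0);
    if (score < bestScore) { bestScore = score; best = label; }
  }
  return best;
}

console.log(decode([0.9, 0.3])); // → "swipe"
```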

Later on, some industrial partners of the project were interested in using XMM within the Unity environment, so I developed the xmm-unity plugin and provided a set of basic examples to get started with it. I'm particularly proud that, by doing this, I contributed to the creation of Reactable's Snap application.


CoMo, which stands for Collective Movements, is a JavaScript library that builds upon the soundworks framework and the xmm-node and xmm-client libraries. It is developed and maintained by Benjamin Matuszewski, and is still used extensively by the ISMM team.


PiPo stands for Plugin Interface for Processing Objects. It is a signal processing library initially developed by Norbert Schnell and maintained by Diemo Schwarz. It can work on signals of any dimension and rate, and is mostly targeted at feature extraction from audio and gesture data. It offers a collection of signal processing modules and comes with an SDK that allows this collection to be extended with new modules. It is mainly used as a Max/MSP library, included in the MuBu package.

I tried really hard to get it accepted as the signal processing framework of the RAPIDMIX API, but I never succeeded, most probably because its complexity didn't suit the target audience of the API. Anyway, I was still able to successfully wrap parts of other signal processing libraries of the consortium into PiPo modules, and I really enjoyed writing the PiPoGraph class, my humble contribution to the PiPo SDK, which allows graphs of modules to be instantiated from simple descriptive strings, using a syntax very similar to Faust's.
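To give an idea of what instantiating a graph from a descriptive string means, here is a heavily simplified sketch: a colon-separated string is parsed into a chain of modules applied in sequence. The module names and the grammar below are placeholders of my own, not PiPo's actual module set or full syntax (which also supports parallel branches):

```javascript
// Toy illustration of building a processing chain from a descriptive
// string, loosely inspired by PiPoGraph. Module names and syntax are
// placeholders, not PiPo's real modules.

const modules = {
  abs:   frame => frame.map(Math.abs),                  // rectify
  sum:   frame => [frame.reduce((a, b) => a + b, 0)],   // reduce to 1 value
  scale: frame => frame.map(x => x * 10),               // fixed gain
};

// "abs:sum:scale" → [abs, sum, scale], composed left to right
function buildGraph(description) {
  const chain = description.split(':').map(name => {
    if (!modules[name]) throw new Error(`unknown module: ${name}`);
    return modules[name];
  });
  return frame => chain.reduce((data, mod) => mod(data), frame);
}

const graph = buildGraph('abs:sum:scale');
console.log(graph([-1, 2, -3])); // → [60]
```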


This technology is a bit different from the others here, as we are not talking about a software library. The R-IoT is a tiny electronic device developed by Emmanuel Fléty, basically the guy behind most of the hardware developments at IRCAM for many years. It is a WiFi motion sensor that embeds a high-resolution (16-bit) 9-DOF sensor, and is based on a WiFi-enabled chipset (TI CC3200) which is programmable via USB using Energia, a fork of the Arduino IDE dedicated to Texas Instruments chipsets. It runs on a small lithium battery, rechargeable via the same USB connector used to program it.

It turns out that this device was created at the initiative of the ISMM team, who used it a lot in their projects, and it was part of the proposed RAPIDMIX technologies. A great outcome of RAPIDMIX is that it is now produced by one of the industrial partners, PLUX, and is available as one of their BITalino DIY electronic parts.

RAPIDMIX being a project about multimodal, interactive and expressive technologies, I worked a lot with motion sensors to process movements of the human body in real time. The R-IoT was at the heart of most prototypes, except when it was replaced by a smartphone used for its similar sensing abilities. I now own my own BITalino R-IoT and am very happy with it.


Essentia is an audio feature extraction library developed by the MTG, similar in spirit to PiPo, and it is used by the backend of the Freesound API to provide advanced functionalities such as queries by similarity. I only developed a few prototypes based on these technologies during the project, and although Essentia and the Freesound API were not integrated into the final RAPIDMIX API, I was convinced that Freesound had to be part of the global picture in order for the project to make sense.

I've been an enthusiastic Freesound member for a very long time. It has a large community of users who regularly contribute high-quality audio content under permissive licenses, and I was really glad I could meet Frederic Font (the creator of the project) in person during a hackathon in Barcelona.

There is already an excellent freesound.js library that wraps all the functionalities of the Freesound API for JavaScript. For my personal needs, however, I wanted a simpler system that would only perform queries, without all the Essentia stuff and without having to deal with the authentication part, providing quick access to the high-quality mp3 preview files that are usually good enough to play with in most web audio applications. So I came up with the simple-freesound library. It probably needs some updates, as I haven't used it in a while, but some prototypes I made during my work on RAPIDMIX were based on it, and I will probably keep on using it in the future.
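As an illustration of the kind of minimal query layer described above, here is a sketch of the idea, not simple-freesound's actual code: build a single text-search URL with a token and only the fields you need. The endpoint and parameter names follow Freesound's public API v2 text search as I remember it; double-check them against the official documentation before relying on this.

```javascript
// Minimal sketch of a Freesound text-search URL builder, in the spirit
// of a token-based (non-OAuth) query layer. Endpoint and parameters
// are believed to match Freesound's API v2, but verify with the docs.

function buildSearchUrl(query, token, { pageSize = 10 } = {}) {
  const params = new URLSearchParams({
    query,
    token,
    page_size: String(pageSize),
    // ask only for what we need: id, name, and the preview urls
    fields: 'id,name,previews',
  });
  return `https://freesound.org/apiv2/search/text/?${params}`;
}

const url = buildSearchUrl('rain', 'MY_API_TOKEN', { pageSize: 5 });
console.log(url);
// → https://freesound.org/apiv2/search/text/?query=rain&token=MY_API_TOKEN&page_size=5&fields=id%2Cname%2Cpreviews
```

Fetching that URL returns a JSON page of results whose `previews` entries point to the mp3 files mentioned above.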