Feedback from Brice Gatinet, composer in residence at IRCAM (Paris)

The Variational Audio Encoder possesses considerable potential to reveal creative types of musical material hidden in a specific dataset. The way sounds interact with each other in this kind of environment brings uncommon particularities and opportunities. Although my interaction with the models was extremely basic (mostly MIDI data), I was genuinely interested in the difference between the resulting sounds and the gestures I used to produce them. These gesture/sound interactions were also specific to a particular dataset. This feature is probably the most powerful addition to the vast array of existing sound-processing tools. The composer can define a sound environment through dataset selection, but the resulting model will be unique: the same gesture on a specific dataset will produce varied results.

This physicality forces a different working path than a more conventional synthesizer, where, most of the time, gestural interactions can be defined more clearly in order to develop habits. In the end, the composition process reminds me of the early precepts of acousmatic music, where the composer's goal is primarily to hear, and to develop a form through listening, rather than to develop musical ideas within a predetermined compositional system. The piece …et… lisse is based on four datasets and three sound materials. All sounds come from the models described in this paper.

Sound material (Sine)

Sound material (Percussion)

Original piece

…et… lisse