Factorsynth

Machine learning for sound deconstruction in Ableton Live

Factorsynth is a Max For Live device created by independent researcher and developer J.J. Burred. It uses a machine learning technique (matrix factorization) to decompose any input sound into a set of temporal and spectral elements. By rearranging and modifying these elements, you can apply powerful transformations to your clips: removing notes or motifs, creating new ones, randomizing melodies or timbres, changing rhythmic patterns, remixing loops in real time, creating complex sound textures, and more.
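Factorsynth's exact algorithm isn't documented here, but the core idea can be sketched in a few lines. The following is a minimal, hypothetical Python illustration (the function names and parameters are ours, not Factorsynth's API): non-negative matrix factorization splits a magnitude spectrogram into spectral templates (the "spectral elements") and their time-varying activations (the "temporal elements"), and any single element can be resynthesized on its own.

```python
# A minimal sketch of the underlying idea, not Factorsynth's actual code:
# non-negative matrix factorization (NMF) of a magnitude spectrogram.
import numpy as np
from scipy.signal import stft, istft
from sklearn.decomposition import NMF

def factorize(x, fs, n_components=8, nperseg=1024):
    """Split a mono signal into spectral templates W and temporal activations H."""
    _, _, Z = stft(x, fs, nperseg=nperseg)
    V = np.abs(Z)                       # magnitude spectrogram: (freq bins, frames)
    model = NMF(n_components=n_components, init="nndsvd", max_iter=500)
    W = model.fit_transform(V)          # spectral elements: (freq bins, components)
    H = model.components_               # temporal elements: (components, frames)
    return W, H, Z

def render_component(W, H, Z, k, fs, nperseg=1024):
    """Resynthesize element k alone, reusing the mixture's phase."""
    Vk = np.outer(W[:, k], H[k, :])     # rank-1 spectrogram of element k
    _, y = istft(Vk * np.exp(1j * np.angle(Z)), fs, nperseg=nperseg)
    return y
```

Muting, swapping, or rescaling individual (W, H) pairs before resynthesis is what makes transformations like removing a motif or changing a rhythmic pattern possible.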

Factorization usually takes a few seconds and can be performed while the Live set is playing. Once the factorization is ready, you can modify your sound in real time by editing and recombining the extracted elements. You can additionally load a second sound and perform an advanced form of cross-synthesis between elements of the first sound (the master sound) and elements of the second sound (the x-syn sound), as sketched below. The generated components can be individually exported as WAV files for further processing. Aimed at electronic music composers, sound designers, and live performers, Factorsynth opens up a whole new range of possibilities in the studio and on stage.
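One plausible reading of this cross-synthesis, continuing the hypothetical sketch above, is to drive the spectral elements of the master sound with the temporal elements of the x-syn sound; Factorsynth's actual pairing scheme and element-level routing may well differ.

```python
# Hypothetical cross-synthesis, building on factorize() from the sketch above:
# render the x-syn sound's temporal activations through the master sound's
# spectral templates. Factorsynth's actual pairing scheme may differ.
import numpy as np
import soundfile as sf                  # assumed here for the WAV export step
from scipy.signal import istft

def cross_synthesize(W_master, H_xsyn, Z_xsyn, fs, nperseg=1024):
    """Render the x-syn sound's time courses through the master sound's timbres."""
    V = W_master @ H_xsyn               # both sounds must use the same nperseg
    _, y = istft(V * np.exp(1j * np.angle(Z_xsyn)), fs, nperseg=nperseg)
    return y

# Usage: analyze both clips, recombine, then export (mirroring the device's
# per-component WAV export).
# W_m, H_m, Z_m = factorize(master, fs)
# W_x, H_x, Z_x = factorize(xsyn, fs)
# sf.write("xsyn_render.wav", cross_synthesize(W_m, H_x, Z_x, fs), fs)
```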

System Requirements

  • macOS or Windows
  • Ableton Live 9 or 10
  • Max For Live 7 or 8

https://www.youtube.com/watch?v=3LodkAEPEY4

The Factorsynth project

J.J. Burred started the Factorsynth project in 2014, first doing research on creative applications of a data analysis method called matrix factorization (hence the name). He has since released several prototype versions for the command line and plain Max. These early versions were not real-time capable, but they have been used by several composers of electronic and electroacoustic music for detailed sound editing and spatialization. Here are some recent works that have used Factorsynth: