Interactive Program
The project combines heuristic audio analysis with real-time interactive machine learning, mapping the frequencies of live singing to various audio effects. A core element is visual feedback that helps singers hit anticipated notes by indicating whether the user is below or above each target pitch. The feedback is displayed in a web browser using D3.js, while Max MSP handles audio analysis and playback and FluCoMa's MLP regressor provides the machine-learning model.
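As an illustration of what the browser-side feedback could look like, here is a minimal D3.js sketch in TypeScript: it draws a dashed line at a target pitch and a marker that turns red when the singer is flat, blue when sharp, and green when close. The frequency range, target pitch, tolerance, and the updatePitch entry point are all hypothetical, not taken from the actual project.

```typescript
// Minimal pitch-feedback sketch (hypothetical values throughout).
import * as d3 from "d3";

const WIDTH = 400, HEIGHT = 300;
const TARGET_HZ = 440; // assumed target pitch (A4)

// Map frequency (Hz) to vertical position: low notes at the bottom.
const freqScale = d3.scaleLinear()
  .domain([110, 880]) // assumed singable range
  .range([HEIGHT, 0]);

const svg = d3.select("body").append("svg")
  .attr("width", WIDTH).attr("height", HEIGHT);

// Dashed horizontal line marking the anticipated note.
svg.append("line")
  .attr("x1", 0).attr("x2", WIDTH)
  .attr("y1", freqScale(TARGET_HZ)).attr("y2", freqScale(TARGET_HZ))
  .attr("stroke", "black").attr("stroke-dasharray", "4 4");

// Circle tracking the singer's current pitch.
const marker = svg.append("circle")
  .attr("cx", WIDTH / 2).attr("cy", HEIGHT).attr("r", 8);

// Called with each detected frequency arriving from Max.
function updatePitch(hz: number): void {
  marker
    .attr("cy", freqScale(hz))
    // Red when flat (below target), blue when sharp, green within 5 Hz.
    .attr("fill", Math.abs(hz - TARGET_HZ) < 5 ? "green"
                : hz < TARGET_HZ ? "red" : "blue");
}
```

In the actual project, something like updatePitch would be driven by pitch data arriving from Max; one possible transport for that is sketched under Process/Concept below.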
ML Model and Max Patch: Noemie-San Dauphinais
D3 and Visual Representation: Azmat Ishaq
Video: https://www.youtube.com/watch?v=dCbOYqfJzaI
April 2024
Process/Concept
I worked on training FluCoMa's machine-learning model in Max MSP, and I built the pipeline that connects Max to the webpage; one possible shape of that bridge is sketched below.
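The write-up does not detail how Max reaches the webpage. One common pattern, shown here purely as a hedged sketch, is to send each detected pitch from Max as an OSC message (e.g. with [udpsend]) to a small Node bridge that relays it to the browser over a WebSocket. The /pitch address, both port numbers, and the use of the `osc` and `ws` npm packages are illustrative assumptions, not the project's actual implementation.

```typescript
// Hypothetical Max-to-browser bridge. Assumes `npm i osc ws`;
// addresses and ports are illustrative, not the project's real values.
import { WebSocketServer, WebSocket } from "ws";
const osc = require("osc"); // osc.js ships without TypeScript types

// Browsers connect here (ws://localhost:8080) to receive pitch updates.
const wss = new WebSocketServer({ port: 8080 });

// Max sends OSC here, e.g. with [udpsend 127.0.0.1 57121].
const udpPort = new osc.UDPPort({ localAddress: "0.0.0.0", localPort: 57121 });

udpPort.on("message", (msg: { address: string; args: number[] }) => {
  if (msg.address !== "/pitch") return;
  // Relay the detected frequency (Hz) to every connected browser,
  // where the D3 page can feed it to updatePitch().
  const payload = JSON.stringify({ hz: msg.args[0] });
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(payload);
  }
});

udpPort.open();
```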