The Music Scene(s)
Transforming real-time video data
into live generative music

Type
Generative Music,
A/V Installation
Technologies
Max, Ableton Live
Supported by 
Goldsmiths, University of London
Concept and background

This project is rooted in my ongoing exploration of how computational methods can extend and enhance the medium of film and video.

In my previous work in this field, I utilized borrowed footage to probe emotional resonance through sequencing and layering.

For this iteration, however, the focus has shifted towards the intrinsic properties of digital video - its texture, light, and potential for real-time manipulation.

Technical Implementation

A custom Max/MSP patch sits at the core of the interaction, processing input from three strategically placed webcams. Each camera functions as a distinct input, almost like a microphone.

Within the patch, tempos, MIDI notes, and musical structures are generated and sent to Ableton Live, where virtual instruments bring the sound to life.
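Conceptually, the camera-to-note stage can be sketched as below. This is a minimal, illustrative Python sketch, not the actual patch: the scale, function names, and brightness-to-pitch mapping are assumptions standing in for logic that lives inside Max/MSP.

```python
# Illustrative sketch: reduce a camera frame to mean brightness,
# then quantize that value to a MIDI note in a fixed scale.
# All names here are hypothetical; the real mapping is in the Max patch.

C_MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets from the root

def mean_brightness(frame):
    """Average pixel value (0-255) of a grayscale frame given as rows."""
    total = sum(sum(row) for row in frame)
    count = sum(len(row) for row in frame)
    return total / count

def brightness_to_note(brightness, base=48, octaves=3):
    """Map 0-255 brightness onto a pentatonic scale spanning `octaves`."""
    steps = len(C_MINOR_PENTATONIC) * octaves
    index = min(int(brightness / 256 * steps), steps - 1)
    octave, degree = divmod(index, len(C_MINOR_PENTATONIC))
    return base + 12 * octave + C_MINOR_PENTATONIC[degree]

# A dark frame yields a low note, a bright frame a higher one.
dark = [[10, 20], [30, 40]]
bright = [[240, 250], [230, 245]]
print(brightness_to_note(mean_brightness(dark)))    # low pitch
print(brightness_to_note(mean_brightness(bright)))  # high pitch
```

In the installation itself, the resulting notes are sent to Ableton Live as MIDI rather than printed.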

Iteration 1: Still Life #1

Drawing inspiration from the qualities of traditional painting, "Still Life #1" harnesses the simplicity of static scenes imbued with subtle changes.

In this first setup, the cameras capture various angles, echoing the practice of Paul Cézanne, who, over the course of days, would move his easel to paint different objects - or even the same one - from multiple perspectives. Each time, he painted what he observed.

Iteration 2: Landscape #1

Positioned along the River Thames, the cameras capture the changing light over an industrial area as the sun sets.

Mirroring the river's subtle, persistent flow, the soundscape of "Landscape #1" offers a nuanced auditory experience that evolves almost imperceptibly. 

Digital designer and technologist based in London
For project enquiries:
antoniomdenuevo@gmail.com