Motion-Based Virtual Instrument Controller
Computer Vision: Virtual Ensemble Controller
The CV Virtual Ensemble Controller uses multiple sensor technologies to generate, process, and visualize incoming motion-based data. It is a performance-driven system that first applies transformative response methods to broaden the sonic possibilities of an acoustically modeled virtual ensemble, assigning three separate sensor technologies distinct functions along the audio signal chain. The computer vision techniques Blob Tracking, Target Sensing, and Centroid Tracking serve as sound generators: the data they produce is used to decouple timbral outcomes from the gestures and material dimensions inherent in producing sound on physically modeled digital instruments. The audio from these virtual instruments is then processed through a Modular Feedback/Delay Looping System, whose parameters are controlled by the incoming motion-tracking data mapped from the wearable MUGIC® sensor.
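The control path described above can be illustrated in miniature. The sketch below is hypothetical and uses plain Python rather than the system's actual camera pipeline: it computes the centroid of a binary "motion mask" (standing in for a thresholded camera frame) and linearly maps the centroid's position onto two invented parameters, a delay time and a feedback amount, in the spirit of the Modular Feedback/Delay Looping System. All function and parameter names here are assumptions for illustration only.

```python
def centroid(mask):
    """Return the (x, y) centroid of all nonzero cells in a 2D binary mask,
    or None if the mask is empty."""
    xs, ys, n = 0, 0, 0
    for y, row in enumerate(mask):
        for x, cell in enumerate(row):
            if cell:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None
    return xs / n, ys / n

def map_range(v, in_lo, in_hi, out_lo, out_hi):
    """Linearly map v from the range [in_lo, in_hi] to [out_lo, out_hi]."""
    t = (v - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# A toy 8x8 motion mask standing in for a thresholded camera frame,
# with a 3x3 region of detected motion in the upper-right area.
frame = [[0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(5, 8):
        frame[y][x] = 1

cx, cy = centroid(frame)  # -> (6.0, 3.0)

# Hypothetical mapping: horizontal position -> delay time in ms,
# vertical position -> feedback amount (0.0 to 0.9).
delay_ms = map_range(cx, 0, 7, 10.0, 500.0)
feedback = map_range(cy, 0, 7, 0.0, 0.9)
```

In the real system, the camera-derived data drives the sound generators while the wearable sensor controls the delay/feedback stage; this sketch collapses both roles into one mapping purely to show how tracked positions can become continuous synthesis parameters.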