DECENTER - Spatial Devices Pack

This project investigates themes and techniques of algorithmic and spatial music in the real-time domain.
It aims to enable a performative musical approach and new forms of sound expression through the development of techniques for real-time spatial sound orchestration in a multichannel environment. One of the project's main concepts is to provide a set of tools for developing instruments that embed fine control of spatialisation in the synthesis process, as opposed to the traditional approach of applying spatial techniques to already composed sound.

This project is an extract of the work I developed during a research period between the GRAIM studio at the Conservatory of Vicenza, Spaes Lab Berlin, and MONOM Berlin.
In my master's thesis, "From a compositional process to an interaction paradigm", I describe my musical activity and its relationship with the digital domain, summarizing my framework of thought on the relationship between composition and digital lutherie, passing through the relationship between instrument and performer; I focus on the materials, methods, and processes adopted in the compositional phase.
I have implemented my archetypes in various applied research projects, spanning musical composition and creative coding, and documented the processes and results. This demonstrates that today engineering, composition, and performance skills can converge in a single figure, thanks to the lowering of barriers in the digital domain.
I have therefore devoted some reflections to the human-digital link, proposing ideas on how this link addresses issues of accessibility, inclusion, and decentralization, towards a change of cultural paradigm. These thoughts are part of my vision of how the composer's role evolves in relation to the digital medium, bringing out new open musical systems and hybrid disciplines, towards alternative methods of interaction and perception.


work samples

I’m interested in analysing and improving the relationship between real-time sound spatialisation
and digital technology. This project combines research, development, and experimentation in spatial sound
technology, opening up new possibilities to design, perform, and experience sound spatially.
I’m developing a series of devices that generate complex spatial movements at audio rate in order to control, in real time, the position of sounds in a performing space through a custom multichannel surround system (a minimal sketch of such an audio-rate trajectory follows below).
This enables a creative practice of space and movement in music. The aim is a flexible, intuitive control platform for artists from different performative disciplines to experiment with and express their ideas in a new dimensionality.
Immersive audio is a revolutionary paradigm of multi-dimensional sound that fully envelops the listener and allows for a more spatialized experience of audio for music, cinema, virtual reality, video games, museums and other fields of exploration in storytelling.
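To make the idea of audio-rate spatial movement concrete, here is a minimal sketch (in Python with NumPy) of a trajectory generator that produces a position for every audio sample. The Lissajous-style path, the function names, and the parameter values are illustrative assumptions, not the actual devices of this pack.

import numpy as np

SAMPLE_RATE = 48000  # assumed sample rate

def lissajous_path(duration_s, fx=0.3, fy=0.47, radius=1.0):
    # x/y position of a virtual source, one value per audio sample
    n = int(duration_s * SAMPLE_RATE)
    t = np.arange(n) / SAMPLE_RATE
    x = radius * np.sin(2 * np.pi * fx * t)
    y = radius * np.sin(2 * np.pi * fy * t + np.pi / 2)
    return x, y

x, y = lissajous_path(duration_s=2.0)
azimuth = np.arctan2(y, x)   # angle to feed a multichannel panner
distance = np.hypot(x, y)    # can drive per-sample gain or filtering

Because the position is computed at the audio rate rather than at a slow control rate, the trajectory itself can become a modulation source for the sound, which is the kind of coupling between synthesis and spatialisation this project explores.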

What are the key questions within this research?


What does it mean, musically speaking, to compose sound in space?

How do the environment and a sound’s spatial distribution affect our listening and perception?

Do new conceptions of sonic space foster a different compositional approach and new forms of sound expression?

Does a redistribution of sound in space enable an augmented perceptive dimension and a deeper interpretation of time and space?
Should spatial parameters be integrated into the compositional process and considered valid components of structural organisation?

How can we design new instruments that take into account the spatial distribution of sound?
What are the correlations between sound spatialisation and sound synthesis?

How can technology support creativity?
What possibilities do musical research and technology offer the composer to explore space, and what influence could they have on musical language?

A spatial instrument is a tool that can manipulate the spatial dimension of the sound it produces at the same time as the other dimensions of that sound. Sound generation and spatialisation are therefore correlated and synchronised. The core engine is composed of three autonomous but interconnected nodes: sound processing, coordinate generation, and multichannel spatial decoding. This integration allows the performer to use a single device to create both the musical content and its multichannel spatialisation in real time, proposing a new interaction paradigm and integrating spatial parameters into the compositional process.
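A minimal structural sketch of such a three-node engine, assuming block-based processing and a simple ring of loudspeakers, might look as follows. All class names, the block scheme, and the equal-power decoder are illustrative assumptions, not the project's actual implementation.

import numpy as np

SR = 48000      # assumed sample rate
BLOCK = 512     # assumed audio block size
CHANNELS = 8    # assumed number of loudspeakers in a ring

class SoundNode:
    # sound processing: produces one block of mono audio per tick
    def __init__(self, freq=110.0):
        self.freq, self.samples_done = freq, 0
    def process(self):
        t = (self.samples_done + np.arange(BLOCK)) / SR
        self.samples_done += BLOCK
        return np.sin(2 * np.pi * self.freq * t)

class CoordinateNode:
    # coordinate generation: one block of audio-rate azimuth values per tick
    def __init__(self, rotations_per_s=0.25):
        self.rate, self.samples_done = rotations_per_s, 0
    def process(self):
        t = (self.samples_done + np.arange(BLOCK)) / SR
        self.samples_done += BLOCK
        return 2 * np.pi * self.rate * t   # radians

class DecoderNode:
    # multichannel spatial decoding: maps (audio, azimuth) to speaker feeds
    def process(self, audio, azimuth):
        spk = 2 * np.pi * np.arange(CHANNELS) / CHANNELS
        spacing = 2 * np.pi / CHANNELS
        # wrapped angular distance between source and each speaker
        diff = (azimuth[:, None] - spk[None, :] + np.pi) % (2 * np.pi) - np.pi
        # equal-power cosine law: only the two nearest speakers sound
        frac = np.clip(np.abs(diff) / spacing, 0.0, 1.0)
        return audio[:, None] * np.cos(frac * np.pi / 2)

# one shared clock: every tick advances all three nodes together,
# so the sound and its trajectory stay sample-synchronised
sound, coords, decoder = SoundNode(), CoordinateNode(), DecoderNode()
blocks = [decoder.process(sound.process(), coords.process()) for _ in range(16)]
output = np.concatenate(blocks)   # shape: (16 * BLOCK, CHANNELS)

Keeping the coordinate generator on the same sample clock as the sound node is what makes the spatialisation part of the synthesis itself rather than an effect applied afterwards.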

“While sound spatialisation techniques have existed for half a century, the proliferation of technology to spatialise sound is a recent phenomenon. Many recent spatialised musical pieces heard in concerts seem to treat the spatialisation as an effect or afterthought that is applied after the piece is composed. Perhaps this is symptomatic of a proliferation of the required technology. However, many composers working in the field of spatialised music state firmly that an understanding of the function of spatial parameters, in semantic terms, is crucial to successful composition. There is a strong sense that the question of what it means, musically speaking, to compose sound in space is of great importance, but it is not clearly articulated.”
Integrating Spatial Parameters in Composition Practice, by Paul Doornbusch and Peter McIlwain