The NAISA Spatialization system
Darren Copeland, Artistic Director
New Adventures in Sound Art (NAISA)
The NAISA Spatialization system has been in use since 2006 for performances produced by New Adventures in Sound Art and in individual artistic projects by its Artistic Director, Darren Copeland. From 1998 to 2006, the Richmond Sound Design Audiobox and its control software ABControl (made by Chris Rolfe of Third Monk Software) were used quite effectively for automated spatialization control among eight loudspeakers.
The NAISA Spatialization system is an interactive performance system for the spatial movement of amplified sounds among one to three octophonic (or quadraphonic) loudspeaker rings.
For physical real-time control the system requires one to three live performers.
A performer located in a central mix position in the hall wears a Polhemus Patriot 6D sensor attached to his or her hand for gestural control of the spit~ spatialization software.
Two additional performers, located on stage or in the periphery (aisles or upper galleries), operate Audio Spotlight directional speakers for manual movement of high-frequency beams of directional sound.
Together these elements create a dynamic and very physical experience of sound spatialization that enhances its performative quality. This document provides a general technical overview of the NAISA Spatialization system and its components.
Spit~ Spatialization Software
The spit~ software, designed by Benjamin Thigpen in Max/MSP, is an adaptation of the spatialization model of the Spat system (the Spatialisateur) developed at IRCAM. The spit~ uses the distance and azimuth parameters of the Spat, but without reverberation in its simulation of distance. Rings of concentric circles provide a visual interface for the user to judge how much volume attenuation and high-frequency roll-off the distance parameter is adding. Like the Spat, the spit~ also uses Doppler shift to lend a life-like quality to movements across greater distances. However, Doppler alters pitch intonation, so users often apply it with discretion and care.
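The two distance cues described above can be sketched in a few lines. This is a minimal, hypothetical model of the idea, not spit~'s actual curves: the attenuation law, the cutoff range and the sample rate are all assumptions.

```python
import math

def distance_cues(distance: float, sr: float = 44100.0):
    """Sketch of the two distance cues: volume attenuation and
    high-frequency roll-off. The specific curves are assumed,
    not taken from the spit~ software."""
    gain = 1.0 / (1.0 + distance)            # quieter with distance
    cutoff_hz = 18000.0 / (1.0 + distance)   # duller with distance
    # One-pole lowpass feedback coefficient for that cutoff:
    coeff = math.exp(-2.0 * math.pi * cutoff_hz / sr)
    return gain, coeff
```

A source at distance 0 passes at full level with an essentially open filter; doubling the distance parameter both lowers the gain and pulls the cutoff down, mimicking the combined cue the concentric-ring interface lets the user judge by eye.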
The basic parameters of each spit~ module include the following:
- Azimuth – directional position in a 360-degree circle.
- Distance – the degree of volume attenuation and high-frequency roll-off.
- Elevation – the mix between different rings of speakers for vertical movement.
- Doppler – a ratio that sets the amount of Doppler shift.
- Offset – the number of degrees by which the right channel is offset from the left control channel.
- Link – similar to Offset, but links multiple spit~ modules so that the user can move a quadraphonic or octophonic source while maintaining the same offset relationship between the source channels.
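The parameter set above can be summarized as a small data structure. The class below is a hypothetical sketch for illustration only; the field names follow the list, but the defaults, units and the right-channel calculation are assumptions, not spit~'s internals.

```python
from dataclasses import dataclass

@dataclass
class SpitModule:
    """Hypothetical sketch of one spit~-style stereo module's parameters."""
    azimuth: float = 0.0    # degrees, 0-360, position of the left control channel
    distance: float = 0.0   # 0 = closest; larger = more attenuation and roll-off
    elevation: float = 0.0  # 0-1 mix between lower and upper speaker rings
    doppler: float = 0.0    # ratio setting the amount of Doppler shift
    offset: float = 90.0    # degrees the right channel is offset from the left

    def right_azimuth(self) -> float:
        # The right channel follows the left control channel at a fixed offset.
        return (self.azimuth + self.offset) % 360.0

m = SpitModule(azimuth=300.0, offset=90.0)
print(m.right_azimuth())  # 30.0
```

The Link parameter would extend the same idea across modules: several `SpitModule` instances sharing one control azimuth, each keeping its own fixed offset from it.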
All of the parameters can be controlled manually with mouse/keyboard entry, with their own internal algorithmic controls (random, drunk, etc.), or by mapping them to the six parameters of the Polhemus Patriot controller. The values for the algorithmic controls can also be controlled by the Patriot or by a nanoKONTROL fader controller, which is explained below.
Other Features of the Spit~ Spatialization Software
Channel output of the spatialization assumes a fixed speaker plot, chosen by the user, that is either Quadraphonic, Double Diamond (0, 45, 90, 135, 180, 225, 270 and 315 degrees) or 4 Pairs (Double Diamond offset by 30 degrees).
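Given a fixed plot like the Double Diamond, one common way to place a source at an arbitrary azimuth is equal-power panning between the two nearest speakers. The sketch below uses that generic technique for illustration; it is an assumption, not spit~'s actual panning law.

```python
import math

# Double Diamond plot from the text: eight speakers at 45-degree spacing.
SPEAKERS = [0, 45, 90, 135, 180, 225, 270, 315]

def ring_gains(azimuth: float) -> list:
    """Equal-power gains across one speaker ring for a source azimuth
    in degrees. A generic panning sketch, not spit~'s internal method."""
    gains = [0.0] * len(SPEAKERS)
    spacing = 360.0 / len(SPEAKERS)
    az = azimuth % 360.0
    i = int(az // spacing)              # speaker at or just before the source
    j = (i + 1) % len(SPEAKERS)         # next speaker around the ring
    frac = (az - SPEAKERS[i]) / spacing # position between the pair, 0-1
    gains[i] = math.cos(frac * math.pi / 2)
    gains[j] = math.sin(frac * math.pi / 2)
    return gains
```

With a cosine/sine crossfade the squared gains of the active pair always sum to one, so perceived loudness stays roughly constant as the source sweeps around the ring. The 4 Pairs plot would use the same routine with each speaker angle shifted by 30 degrees.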
The Spit~ software also contains an audio routing matrix for connecting 8 stereo soundfile players and 16 channels of external input to the 4 stereo spit~ modules as well as directly to 24 speaker outputs and 8 auxiliary outputs.
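The kind of routing matrix described above amounts to a set of source-to-destination patch points, each with a gain. The class and the source/destination names below are hypothetical, purely to illustrate the concept:

```python
# Hypothetical sketch of a routing matrix: sources (soundfile players,
# external inputs) patched to destinations (spit~ modules, direct
# speaker outputs, aux outputs) with a gain per crosspoint.
class RoutingMatrix:
    def __init__(self):
        self.patches = {}  # (source, destination) -> gain

    def connect(self, source: str, dest: str, gain: float = 1.0):
        self.patches[(source, dest)] = gain

    def disconnect(self, source: str, dest: str):
        self.patches.pop((source, dest), None)

    def destinations(self, source: str):
        """List every (destination, gain) a source is patched to."""
        return [(d, g) for (s, d), g in self.patches.items() if s == source]

mtx = RoutingMatrix()
mtx.connect("player1-L", "spit1-L")            # into a spit~ module
mtx.connect("player1-L", "aux1", gain=0.5)     # and, quieter, to an aux out
```

Because one source can hold several simultaneous patches, a soundfile player can feed a spit~ module and a direct speaker or aux output at the same time, which matches the behaviour the preset system then recalls in one keystroke.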
The Spit~ has a preset storage system for quick recall of Controller, Audio Routing, Spit~ and other settings. These presets can be recalled by pressing keys on the laptop keyboard so that a user can transition quickly through different presets during a performance.
The Polhemus Patriot sensor is called 6D because it tracks six degrees of freedom; this tracking information is sent from a wired sensor to its magnetic sensing base and finally to the computer in the form of serial data. The controller was designed for pilot training but has been an effective tracking tool for many purposes, including motion tracking for computer animation.
The Patriot controller has made the initial learning curve quite easy for newcomers to the NAISA Spatialization system, provided they are supported by a technician knowledgeable about the spit~ software. However, advancing to finer control takes considerable practice, so multiple rehearsals are critical for satisfactory public performances.
NAISA has found that the Patriot is best used when positioned on the top part of the left hand of the user. However, the sensor does not assume any particular physical placement or treatment within its zone of detection.
The Patriot has six parameters:
- X – tracks the spatial positioning of the hand in a forward-backward direction.
- Y – tracks the spatial positioning of the hand in a left-right direction.
- Z – tracks the spatial positioning of the hand in an up-down direction.
- Tilt – tracks the tilting of the wrist in an up-down direction.
- Roll – tracks the rolling of the wrist in a lateral direction.
- Azimuth – tracks the direction the sensor is pointing.
The six parameters of the controller can be assigned to any of the parameters of the spit~.
Hints and Suggestions for Patriot Controller
The mapping of X, Y and Z to the X, Y and Elevation parameters of the spit~ is a very clear and effective starting point for mapping gestural control to manual placement of a sound in space, and it has been the backbone of many spatialization mappings using the Patriot.
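Under such a mapping, the hand's X/Y position corresponds to a point on the concentric-ring interface, which is just a Cartesian-to-polar conversion into azimuth and distance. The sketch below illustrates that conversion; the axis conventions and the normalized -1..1 range are assumptions:

```python
import math

def xy_to_azimuth_distance(x: float, y: float):
    """Convert a hand position (x = left-right, y = forward-back,
    both assumed normalized to -1..1) into azimuth (degrees, 0 = front,
    clockwise) and distance (0 = center of the rings)."""
    azimuth = math.degrees(math.atan2(x, y)) % 360.0
    distance = math.hypot(x, y)
    return azimuth, distance
```

For example, a hand fully to the right of center (`x = 1, y = 0`) lands at roughly 90 degrees azimuth at distance 1, while pulling the hand back toward the body swings the azimuth toward the rear of the ring.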
Articulated gestural parameters like Tilt, Roll or Azimuth work very well for spit~ parameters such as the azimuth offset between stereo channels, and especially for the speed or arc size of automated features like random, drunk and circle.
Together the gestural parameters can be used with the spatial ones in order to control multiple spit~ modules at the same time or to use automated and manual control in tandem on the same spit~ module. However, there is a physical limitation to assigning multiple gestural parameters at once, because the physical movement required to control one parameter may influence the output of the other.
Mappings are also possible between the spit~ and the Korg nanoKONTROL. This is helpful for parameters that are best controlled with an on/off switch or that benefit more from a fader interface rather than a gestural one.
There are also mappings available for the Behringer BCF2000 fader controller to control input and output levels. There is a separate routing matrix for the fader controller, which allows for instant recall of different fader configurations in a performance. By using the fader controller with just the soundfile players and the audio routing matrix, it is possible for this system to function as a conventional fader diffusion system. However, it is also interesting to add a fourth performer on faders alongside the three other performers.
The Audio Spotlights
The addition of the Audio Spotlights has been in development since 2011 and presents a number of performance challenges that cannot always be met in every circumstance. For one, they require rehearsal, not just to develop familiarity with the system and the audio content, but also to develop communication strategies and physical choreography among the performers. The Spotlights are best used in venues that have some reflective surfaces and that are not too small. The system can be used without the Audio Spotlights, but they offer something truly unique and special.
The Audio Spotlight is a directional loudspeaker: it projects sound in a very narrow, five-degree beam that can maintain its amplitude intensity (and this narrow beam) over distances of 200 feet or more, quite the opposite of conventional loudspeakers. The Audio Spotlight emits ultrasonic frequencies that, as they interact with the air, produce treble frequencies in the audible range for humans. Chris Clifford has added frames with handles to the Audio Spotlights so that a performer may hold the speaker in his or her hands and manually direct the movement of sound in space.
The Audio Spotlight is very effective at creating the illusion of a sound located in a very precise spot on a wall, floor or other flat surface. However, in a very reflective environment, such as an art gallery with parallel flat surfaces, the sound beam will bounce and reflect off of multiple surfaces, creating different illusions of sound localization for different listeners in the audience. Also, the Audio Spotlight may not be audible to every listener at the same time, which is challenging for the way concert spatialization is conventionally approached. As an interesting contrast, the Audio Spotlight can produce the impression that the sound is coming from inside a listener's head when it is pointed directly at a listener more than six feet away. Furthermore, the ultrasonic output of the Spotlight makes it impossible for the performer to hear the sound coming from the speaker he or she is operating until it bounces off a surface. These idiosyncratic qualities make for unusual challenges that may or may not suit every piece using the NAISA Spatialization system, so the Spotlights are not used in every instance.
The NAISA Spatialization system offers artists a very dynamic, responsive and sensitive instrument for moving sounds among multiple loudspeakers in public performance. A number of features make it unprecedented in the history of multichannel spatialization, particularly the use of multiple performers, which runs counter to the solo nature of other spatialization practices. The system also provides audiences with a visual and physical correlation to their auditory experience of sound moving in space. The current tendency in fixed media presentations to perform in the dark, or with as little lighting as possible, certainly runs counter to the inherent theatricality of concert music performance that this system embraces.
Documentation and Further Reading
For a video explanation of the entire Spatialization system, go to the NAISAtube web channel.
There is some in-performance video footage captured by Ricardo Clementi at the Mantis Festival at the University of Manchester.
For a longer discussion and other video and photo documentation about the Audio Spotlights, please refer to the article "The Audio Spotlight in Electroacoustic Performance Spatialization" by Darren Copeland, published in eContact! 14.4.
Audio Spotlight brochure. “Audio Spotlight: directional sound system.” Holosonics, Inc.
Copeland, Darren. "Towards a Wider Immersion in Sound Art." In Art of Immersive Soundscapes, edited by Ellen Waterman and Pauline Minevich. Regina: University of Regina Press, 2013, pp. 179-195.
Polhemus brochure. “Patriot.” Polhemus website.
Rolfe, Chris. "A Practical Guide To Diffusion." eContact! 2.4 (September 1999). Canadian Electroacoustic Community.
Thigpen, Benjamin. Personal website.