they reach more than a few milliseconds, the perceived coherence of the source sound is unacceptably degraded. To reduce this error we use a compensation mechanism: each node waits to apply control data until the largest buffer latency among all nodes has elapsed. We thus achieve a lower interchannel error at the expense of increased overall latency. This trade-off is better suited to performance, because a constant delay is less disorienting to a performer than variable, random delays.
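The compensation scheme can be sketched as follows. This is not the Netmix implementation; the class and method names are hypothetical, and we assume per-node buffer latencies have already been measured. Each node schedules incoming control data for the worst-case latency, so a fast node waits out the difference and all nodes apply the same message at roughly the same moment.

```python
class ControlScheduler:
    def __init__(self, node_latencies_ms):
        # Worst-case buffer latency over all synthesis nodes (assumed measured).
        self.max_latency_ms = max(node_latencies_ms)

    def apply_time(self, arrival_ms, local_latency_ms):
        """Return when a control message should take effect on this node.

        A node with a small local buffer latency waits out the difference,
        trading a constant added delay for interchannel alignment.
        """
        return arrival_ms + (self.max_latency_ms - local_latency_ms)

sched = ControlScheduler([12.0, 20.0, 35.0])
# The fastest node (12 ms buffer) waits an extra 23 ms;
# the slowest node (35 ms buffer) applies the message immediately.
fast_node = sched.apply_time(arrival_ms=100.0, local_latency_ms=12.0)   # 123.0
slow_node = sched.apply_time(arrival_ms=100.0, local_latency_ms=35.0)   # 100.0
```

Note that the added delay is constant once the node set is fixed, which is exactly the property that makes it tolerable in performance.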
Bandwidth, in this context, is the amount of computation available per second, which determines how many oscillators we can resynthesize. In our system, we simply add more computers until the desired bandwidth is reached.
From a technical standpoint, we could also easily increase the number of oscillator banks synthesized on each computer: each bank requires only one audio channel of output, and several 8-channel DACs are currently on the market. Realizing this improvement would, however, also require optimizing the synthesis routines.
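The scaling argument above rests on the fact that additive resynthesis cost grows linearly with oscillator count. A minimal sketch of one oscillator bank (not the Netmix synthesis routine; the function name, partial frequencies, and amplitudes are illustrative) makes this explicit: every output sample is a sum over all oscillators, so adding machines, or banks per machine, scales the total count directly.

```python
import math

def render_bank(freqs_hz, amps, sr=44100, n_samples=64):
    """Render n_samples of a bank of sine oscillators, mixed to one channel.

    Cost per sample is proportional to len(freqs_hz), so the per-node
    computation budget caps the number of oscillators per bank.
    """
    out = []
    for n in range(n_samples):
        t = n / sr
        out.append(sum(a * math.sin(2 * math.pi * f * t)
                       for f, a in zip(freqs_hz, amps)))
    return out

# Two partials an octave apart, mixed into a single output channel.
samples = render_bank([440.0, 880.0], [0.5, 0.25])
```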
So far, Netmix has only been controlled by a single performer. A more idiomatic use of the system would involve multiple control sources, such as geographically distributed performers or audience-driven multi-sensor input. For distributed performance, it would be desirable to develop a method for real-time dissemination of the aggregate audio mix as it is being produced, with the minimum possible latency.
Finally, the system could be expanded to embrace other methods of sound production. Likely candidates are granular synthesis, processing of stored samples, and processing of sounds recorded live in performance.
The Netmix audio framework has enabled us to rapidly deploy an experimental live multi-channel Fourier synthesis model. The framework has inherent technical limitations, particularly with regard to latency. Although our performance centers on spatial audio, unpredictable packet arrival times indicate that our system could never be used for precise spatial placement when it is deployed on more than one computer.
Because our interest was in creating spatial diffusion effects that cannot be realized with a stereo speaker setup (together with a particular model for performance control of resynthesis parameters), we are not greatly concerned by the temporal indeterminacy introduced by sound-card and network latencies. Moreover, the enveloping involved in short-time Fourier analysis already blurs transients, making latency effects from these sources somewhat less noticeable. It remains to be seen whether the temporal indeterminacy in our system will prove a serious problem when it is applied to time-domain synthesis.
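A rough calculation suggests why STFT granularity partially masks these latencies. Assuming an illustrative 1024-sample analysis window at 44.1 kHz (the paper does not specify Netmix's actual window size), a single frame already spans on the order of tens of milliseconds, comparable to typical sound-card and network jitter:

```python
# Hypothetical analysis parameters, for orders of magnitude only.
sr = 44100        # sample rate in Hz
window = 1024     # analysis window length in samples (assumed)

# Duration of one analysis frame in milliseconds.
frame_ms = 1000.0 * window / sr
print(round(frame_ms, 1))  # 23.2
```

Timing errors smaller than this frame span are smeared by the analysis/resynthesis enveloping itself, which is consistent with our observation that such latencies are less noticeable than they would be in time-domain synthesis.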