Parametric Gesture Curves:
A Model for Gestural Performance
Stefan Müller
Universität Zürich, Institut für Informatik
s.mueller@computer.org
Ongoing research in performance theory integrates methods for complex instrument parameter spaces and models for musical gestures. This article discusses a model of symbolic and physical gesture spaces. It further proposes a symbolic gesture space well suited to keyboard-like instruments and focuses on the algorithmic construction of symbolic parametric gesture curves, which represent musical gestures in that space. The theory presented in this article is designed to lift the functionality of RUBATO's PerformanceRubette to the gestural domain.
1 Introduction
The RUBATO music workstation, first presented by Mazzola and Zahorka (1994), contains a software module called the PerformanceRubette, which is able to perform musical scores based on a number of user-controlled analyses (e.g. metric, harmonic, or motivic) of them. A performance is calculated in terms of a performance transformation from a symbolic score space to a corresponding physical space. The score space is generally built from six instrument parameters: onset, pitch, loudness, duration, glissando, and crescendo. The output of the PerformanceRubette is typically a MIDI file of the resulting performance, which can then be played on a built-in MIDI synthesiser or on an external MIDI device. However, we realised that a performance transformation based on these parameters alone is too restrictive as a model for a more realistic performance, particularly with respect to sound quality. For example, in the case of the violin, it is impossible to specify plucked versus bowed notes with the above parameters. Furthermore, the whole area of musical gestures was omitted in the original PerformanceRubette. Recent and ongoing research in performance theory at the MML therefore focuses on how instrument parameter spaces can be extended and how concepts for musical gestures can be incorporated. This includes direct sound synthesis from given physical parameters and visual representation of musical gestures by means of performing avatars.
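To make the notion of a performance transformation concrete, the following is a minimal sketch of how symbolic score parameters might be mapped to physical ones. The tempo curve, the integration scheme, and all function names here are hypothetical illustrations, not RUBATO's actual operators; only onset, pitch, loudness, and duration are treated, omitting glissando and crescendo.

```python
# Illustrative sketch (not RUBATO code): map symbolic note parameters
# (onset in beats, pitch as MIDI number, loudness 0-127, duration in beats)
# to physical parameters (seconds, Hz, normalized amplitude, seconds).

def tempo(E):
    """Hypothetical tempo curve (BPM) over symbolic onset: a gradual ritardando."""
    return 120.0 - 20.0 * min(E / 16.0, 1.0)

def physical_onset(E, dE=0.01):
    """Integrate beat durations 60/tempo over symbolic time to obtain seconds."""
    t, x = 0.0, 0.0
    while x < E:
        t += 60.0 / tempo(x) * dE
        x += dE
    return t

def transform(note):
    """Apply the (hypothetical) performance transformation to one note."""
    E, H, L, D = note
    return (
        physical_onset(E),                           # physical onset in seconds
        440.0 * 2 ** ((H - 69) / 12),                # frequency in Hz (12-TET)
        L / 127.0,                                   # normalized amplitude
        physical_onset(E + D) - physical_onset(E),   # physical duration in seconds
    )
```

For instance, `transform((0.0, 69, 127, 1.0))` yields concert A at full amplitude, with a physical duration slightly above half a second because the tempo curve falls below 120 BPM. Any realistic transformation would of course be driven by the user-controlled analyses rather than a fixed tempo function.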
Consequently, the model of performance transformations has been extended with lifted gesture spaces and a gesture transformation, as introduced in Mazzola (2002a). Those gesture spaces contain parametric gesture curves that represent musical gestures: the symbolic gesture curve describes the movements of