Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-01), Limerick, Ireland, December 6-8, 2001

DISCOVERING EXPRESSIVE REALTIME PARAMETERS: THE PERFORMING LAPTOP ARTIST AS EFFECT DESIGNER

Dan Trueman, Music Department, Colgate University, dtrueman@mail.colgate.edu
R. Luke DuBois, Computer Music Center, Columbia University, luke@music.columbia.edu
Curtis Bahn, iEAR Studios, Rensselaer Polytechnic Institute, crb@rpi.edu

1. INTRODUCTION

The laptop artist is a modern-day jack-of-all-trades; to a degree unlike any earlier time in history, composers, performers, instrument designers and engineers are often one and the same person. In this case, the feedback loop between musical experimentation and software/hardware development is very short, and can produce unusual, idiosyncratic results. Over the last several years, we (all "laptop artists" of this sort) have been active as performers, composers, software/DSP developers and hardware instrument builders. In particular, we have been seeking more compelling physical connections between body and digital sound, creating personalized hardware/sensor interfaces [1] [2], connecting them to custom signal-processing algorithms, and generating electronic sound through various kinds of speaker arrangements (both outside-in surround configurations and inside-out spherical arrangements) [3] [4]. In this paper, we focus on some of the software, signal processing and synthesis algorithms that we have worked with, and discuss how our particular creative contexts have influenced algorithm design. We emphasize our search for "expressive parameters": algorithm parameters which we can imagine performing. We discuss how physical interfaces inspire this search, and in turn how the sonic results of a particular algorithm inspire new physical interfaces. We also discuss some of the music made with these algorithms (much of which has been released on several CDs over the last year [5] [6] [7]).
Finally, we describe the publication of an open-source software toolkit, PeRColate, our primary software workbench, which facilitates both low-level and high-level development for laptop artists [8].

2. SHORT/MEDIUM-TIME DELAY EFFECTS

We describe two algorithms that manipulate short/medium-time (10-1000 ms) delay-lines. The first, the "scrubber," is a delay-line scrubber. Constructing a clean algorithm (one that will not click or generate unwanted artifacts) which allows one to scrub (vary the playback rate of) a finite delay-line is tricky. Our solution involves using three rotating buffers: one for recording, one for playback, and a spare, which allows for clean crossfading during buffer changes (we describe this algorithm more completely in the final paper). While playback rate is clearly one of the algorithm's most expressive parameters, three other parameters that result from the algorithm's architecture are equally powerful: buffer size, overlap time (the amount of overlap when switching and cross-fading buffers) and record ramp (the envelope on the record buffer) are all interesting to manipulate in performance. In our final paper, we describe some of these applications and provide musical examples from our recent work. In particular, we illustrate several compelling mappings from physical interface to algorithm parameter.

The "munger," a granular-sampling algorithm, uses a similar three-buffer scheme to generate clean granular textures from a delay-line. Like the scrubber, this algorithm has several obvious expressive parameters (grain size, playback rate) as well as a few less obvious ones. For instance, dynamically controlling the maximum delay-length, in the range of 10-1000 ms (or more), allows the performer to expressively create a sense of "tightness" (or "looseness") around the input signal. Again, we describe particular musical applications of this and other mappings.
3. PHYSICAL MODELS

As is well known, many physical models offer intuitive parameters for expressive control. We have worked extensively on tailoring physical input parameters to the body, creating intuitive and kinesthetic connections to sound in complex multi-dimensional performance environments. Most of our efforts with physical models have favored pushing these expressive parameters outside of their "realistic" ranges, creating instruments or virtual players which are either caricatures of the original model or simply unrecognizable. We have also experimented with creating hybrid models which embody features of physically incompatible instruments. One particularly compelling example is the "blotar," a combination of the flute, electric guitar, and mandolin physical models, which takes advantage of the similar design of the flute and electric guitar models (in the final paper, we describe this model, an elaboration of the model first introduced by Cook [9], in more detail). In addition to the familiar expressive parameters of the original models, we discovered that a simple crossfade between the low-pass filter of the guitar model and the one-pole filter of the flute model offered unusual expressive possibilities (we refer to this kind of hack as an SMH: "Stupid Musician Hack"). As usual, in the paper and presentation, we will present several musical applications of this and other models. We have also made extensive use of Perry Cook's Physically Inspired Sonic Modeling (PhISM) algorithms [10].
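The filter crossfade at the heart of this SMH can be sketched as follows. This is an illustrative reconstruction, not code from the blotar itself: a one-zero averaging lowpass stands in for the guitar model's loop filter, a one-pole filter for the flute's, and all names and coefficient values are hypothetical.

```c
#include <assert.h>
#include <math.h>

/* Crossfade between two simple lowpass filters running in parallel:
   a one-zero (averaging) filter and a one-pole filter. Sweeping `mix`
   in performance morphs between the two spectral characters. */
typedef struct {
    float z_prev;   /* one-zero state: previous input sample */
    float p_prev;   /* one-pole state: previous output sample */
    float pole;     /* one-pole coefficient, 0..1 (higher = darker) */
    float mix;      /* 0 = pure one-zero ("guitar"), 1 = pure one-pole ("flute") */
} BlotarFilter;

static float blotar_tick(BlotarFilter *f, float in) {
    float onezero = 0.5f * (in + f->z_prev);                   /* averaging lowpass */
    f->z_prev = in;
    float onepole = (1.0f - f->pole) * in + f->pole * f->p_prev;
    f->p_prev = onepole;
    return (1.0f - f->mix) * onezero + f->mix * onepole;       /* the crossfade */
}
```

Both component filters have unity gain at DC, so the crossfade itself stays clean; the expressive interest lies in sweeping `mix` (and `pole`) while the string/jet loop is running.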
In one application, models of several different kinds of "shakers" (maraca, bamboo wind-chimes, and guiro, for example) are "hung" from a violin bow that is fitted with sensors to control the models; using the bow, we can "shake" the models and dynamically alter their resonances. We further describe several applications and modifications of these algorithms for use with interactive works for dance [11] and experimental digital instruments for children [12].

4. OLD SCHOOL AND IDIOSYNCRATIC

Applied in realtime, with various kinds of physical control, old-fashioned signal-processing algorithms, unmodified or reworked in simple ways, can yield musically interesting results. Examples include ring modulation, "terrain" (a two-dimensional wavetable scanner, which is particularly effective when manipulated with a graphics tablet), and "chase" (a three-way signal comparator). For the laptop artist, who always has an eye on the CPU meter, these are cheap and effective and, while lacking some of the nonlinearity and richness of their analogue models, offer flexibility that was impossible with hardware. Also of interest (and examples of SMHs) are algorithms which arbitrarily take advantage of software system architecture. One particularly amusing example, "klutz," simply reverses the samples in the computation signal vector of the Max/MSP environment; in this case, the signal vector size becomes an expressive parameter. In the final paper, we further illustrate these and other examples, and describe various musical applications and physical mappings used to perform these algorithms.

5. NEW TERRITORY: THE FREQUENCY DOMAIN

The laptop has finally entered the frequency domain, and with it, the laptop artist. We are just now beginning to explore the possibilities of manipulating frequency-domain data in realtime with various kinds of physical controllers.
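The klutz idea is simple enough to sketch directly. The following is a rough stand-alone approximation, not the MSP external itself: each block of `vecsize` samples is reversed in place, so the audible character of the effect changes with the block size, just as it does with the signal-vector size in Max/MSP.

```c
/* "klutz"-style effect: reverse one block of samples in place.
   In MSP this would happen once per signal vector in the perform routine. */
static void klutz_block(float *buf, int vecsize) {
    for (int i = 0, j = vecsize - 1; i < j; i++, j--) {
        float t = buf[i];
        buf[i] = buf[j];
        buf[j] = t;
    }
}

/* Apply the reversal block by block over a longer signal; any trailing
   partial block (fewer than vecsize samples) is left untouched. */
static void klutz_process(float *sig, int n, int vecsize) {
    for (int off = 0; off + vecsize <= n; off += vecsize)
        klutz_block(sig + off, vecsize);
}
```

With small `vecsize` the result is a subtle timbral smearing; with large `vecsize` it becomes audible time reversal, which is why the vector size itself turns out to be the expressive parameter.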
Phase vocoding, naturally, is particularly enticing, and, while much can be learned from the deep legacy of non-realtime work in the frequency domain, the context is different, as are the problems and possibilities: input-output latency is of particular concern, and causes problems when large FFT sizes are desired. But, more importantly, given that we imagine these frequency-domain manipulations as realtime musical instruments that need to provide aural feedback in response to specific physical controls (imagine "bowing" samples in the frequency domain), the goals and possibilities are different for the laptop artist. In the final paper, we describe some of our beginning efforts in the frequency domain, and anticipate that, come conference time, we will have numerous musical examples to present as well.

6. PeRColate: AN OPEN SOURCE TOOLKIT

PeRColate is an open-source distribution of a variety of synthesis and signal-processing algorithms for Max/MSP [reference], and now NATO [reference]. It began as a (partial) port of the Synthesis Toolkit (STK) by Perry Cook (Princeton) and Gary Scavone (Stanford CCRMA). Like the STK, it provides a fairly easy-to-use library of synthesis and signal-processing functions (in C) that can be wired together to create conventional and unusual instruments. Also like the STK, it includes a variety of precompiled synthesis objects, including physical-modeling, modal, and PhISM-class instruments; the code for these instruments can serve as foundations for creating new instruments (the blotar is one example) and can be used to teach elementary and advanced synthesis techniques. Since its first release (in February 2000), PeRColate has come to include many more objects not from the STK; some are from RTcmix and others are of our own design (like scrub, the munger, and the SID, "Synthesis Isn't Dead," algorithms). In addition, a library of PeRColate NATO video-processing objects has been created.

7. REFERENCES

[1] Trueman, D. and P. R. Cook.
"BoSSA: The Deconstructed Violin Reconstructed." Proc. of the International Computer Music Conference, Beijing, October 1999.
[2] Bahn, C. R. "SBASS: Sensor Bass." http://silvertone.princeton.edu/~crb/Activities/sbass.htm
[3] Cook, P. R. and D. Trueman. "Spherical Radiation from Stringed Instruments: Measured, Modeled, and Reproduced." Journal of the Catgut Acoustical Society, November 1999.
[4] Trueman, D., C. R. Bahn and P. R. Cook. "Alternative Voices for Electronic Sound: Spherical Speakers and Sensor-Speaker Arrays (SenSAs)." Proc. of the International Computer Music Conference, Berlin, Germany, 2000.
[5] interface. "./swank." CD released by C74 Records.
[6] Bahn, C. R. "r!g." CD released by the Electronic Music Foundation.
[7] the Freight Elevator Quartet. "Fix it in Post." CD released by C74 Records.
[8] Trueman, D. and R. Luke DuBois. "PeRColate." http://music.columbia.edu/PeRColate/
[9] Cook, P. R. "Towards the Perfect Audio Morph? Singing Voice Synthesis and Processing." Proc. Workshop on Digital Audio Effects (DAFx-98), Barcelona, Spain, 1998.
[10] Cook, P. R. "Physically Inspired Sonic Modeling (PhISM): Synthesis of Percussive Sounds." Computer Music Journal, Volume 21, Number 3, September 1997.
[11] Bahn, C. R. and T. Hahn. "Streams." http://silvertone.princeton.edu/~crb/Streams/streams.htm
[12] The JPMorganChase KIDS Digital Movement and Sound Project. http://music.columbia.edu/kids/