<< Item Details |
Flies in the Face of LogicFlies in the Face of Logic Nick Didkovsky The Twittering; Machine is a collection of five pieces for computer-controlled piano. The titles are taken from paintings by Paul Klee, whose work 1 have always found to inspire the making of music. The suite was performed by an Amiga 3000 computer controlling a Kurzwcil PX1000 piano module. I programmed the software in I Hierarchical Music Specification language, an experimental computer music programming language created by Phil Burk. Larry Polansky, and David Rosenboom at The Mills College Center for Contemporary Music. The piano is played by two virtual hands. Each hand interprets a set of 12 changing parameters every half second. The parameters are: Pitch Mean and Range, Loudness Mean and Range. Vertical Density Mean and Range, Event Density Mean and Myhill Ratio, Legato Mean and Range, and Harmonic Complexity Mean and Range. By using these generalized notions of musical behavior, no specific pitches, loudness, durations, etc. are ever specified by myself. Instead, stochastic processes oversee the specifics of musical results: how quiet or loud a chord is played, what its root pitch is, how harmonically complex it is, and so on. U is in many ways a liberating way lo compose; using broad brushstrokes to create a form, and leaving the filling in of details to a virtual stochastic interpreter. A score for a piece is a profile of how these statistical parameters change over time. One five minute piece requires two lists of 600 parameter sets, one list for each "hand" (a total of 14,400 values). These parameter lists were generated using software and hardware. Figure 1 shows the event density profile for the left hand performing the tide cut of the suite. This profile was generated by a1/f sequence generator which exhibits fractal self-similarity. You can see the cliffs and plateaus of the profile being duplicated on varying scales. High plateaus correspond to a period of very dense performance (an average of many events per unit time), while low valleys correspond to sparser behavior. The cliffs are heard as sudden changes in density. Figure 2 shows the event density profile of the left hand performing "Death and Fire". This profile was created with the motion sensing UForce video game controller. While moving my hand through the motion-sensing field of the UForce, my software read eight numbers from the device every half second and mapped them onto various parameters like pitch mean, loudness mean, etc. You can see the inherent instability of the UForcc by looking at the profile in Figure 2. The slightest hand movement caused radical changes in the values the device sent to the software. As a result. "Death and Hire" is made up of dense, chaotic clusters of widely variable changes in pitch, density, and loudness. My software uses a Myhill distribution to oversee the interpretation of event density, after Charles Ames's “A Catalog of Statistical Distributions" (Leonardo Music Journal, No. 1 Vol. 1, 1991). An entry-delay approach is used to generate a sequence of events. Each event happens some time-delay after the previous event, which is calculated using the even! density' mean and Myhill Ratio. The event density- mean is interpreted as the most likely number of events per unit time (say, 7 chords or pitches in a four second time slot). The Myhill Ratio value is less straightforward. It oversees how, in this example, the 7 events are distributed over the four seconds. A ratio near 1.0 would result in the events being evenly spaced over lime (each event lasting exactly 4/7ths of a second; a perfect septuplet). As the ratio value increases 10 over 100.0, the placement of these 7 events becomes more and more uneven, approaching an exponential distribution. Values within the extremes allow for subtle rhythmic distortions and displacements. Running the software evokes the image of a performer realizing various versions of the same piece, as the use of probabilities ensures that the same lists of parameters will never be performed the same way twice. The versions included here are the "best takes" of the live pieces. Thanks to: Charles Amcs for his help in implementing the Myhill distribution, Phil Burk for JForth and 11MSL support, Thomas Dimuzio for post-product ion support, and Robert Marsanyi for providing me with the Uforce hardware and software hacks. [Nick Didkovsky • may 21, 1994 • new york, NY] C.W.Vrtacek The phrase "digital prepared piano music" prompts the question, how can an analog process (prepared piano) be transferred to the digital realm? A prepared piano is, after all, an acoustic instrument that hits been modified to produce sounds other than those intended by its maker. Ball bearings might be placed on the strings, pieces of wood jammed between them, contact microphones attached to the hammers, etc. This dramatically increases the sonic possibilities available to the composer or performer. So why not apply die same process to a digital piano? After all, a digital piano is, in essence, no different from a conventional acoustic piano. It may not look the same, but its intended purpose is to mimic as closely as possible the sound and experience of playing an acoustic piano. But what to substitute for strips of wood and ball bearings? Putting foreign objects inside a digital piano would likely damage the circuitry, Moreover, such actions would not change the sound of the instrument since the sound is produced electronically, not mechanically, rendering the exercise pointless. Nevertheless, quite a bit can be done TO modify the sound of a digital piano without relying on external sound processing equipment such as harmonizers, reverb units, etc. My pieces were composed on a Mirage IJSK-8 manufactured in the mid-80s by Ensoniq, a crude instrument by current standards, but digital nonetheless. The unit is equipped with a sequencer. Sequencers function like digital tape recorders, allowing the composer to store a passage of music, add to it, edit it. and so forth. The Mirage was connected to another sequencer, a Yamaiha QX-7. This permitted me to compose music on the Mirage, route it 10 the QX-7, edit it, and send it back to the Mirage. The process was repeated many times. In effect, this is a digital version of the old analog tape technique known as sound-on-sound (an example of technology coming full circle). The aforementioned process allowed me to create dense layers of notes. Another feature I employed was the sequencer's tempo control, which is a straightforward transformation, [t permits the composer to speed up or slow down a passage without altering pitch, sustain, attack, etc. This made it possible to create passages of extraordinary difficulty, were they to be performed live. The single most important feature used in preparing my pieces is a function known as step-time. Step-time permits the composer to select a time signature (4/4, 12/8, etc.) and assign durations (1/4 note, 1/8 note. etc.). I chose uncommon time signatures such as 15/8 and 9/16, then assigned the same value to each note played into the sequencer. Passages originally played in real time were transferred to the sequencer and bludgeoned into rigid submission. It sounds cruet, hut the result is often humorous. The tempo control was adjusted during playback. This produced a mechanized feel as thick slabs of music came out of the sequencer in a dizzying stream of quarter or eighth notes. If this sounds vaguely familiar, it is because I was inspired by Conlon Nancarrow's music for player piano (again, technology marches backward, so to Speak). Composers who choose to work in non-commercial styles such as I have are often accused of being 'out of touch' with their audience, perpetrators of some academic joke. But technology has now altered the playing field. Prior to [he mass marketing of pre-recorded music, all sound was ephemeral- if the tree fell in the forest and you weren't around, you didn't hear it. To hear music, you had to be present during a performance. Music was a communal experience. Today most of us do the majority of our music listening in cars, homes or offices and almost always in solitude. Personal stereos have created a situation 180 degrees removed from the old paradigm. Instead of crowds of people hearing the same music at the same time, we find crowds of people on buses, in airports and in waiting rooms simultaneously listening to different pieces of music. Pre-recorded music permits the listener to replay a selection over and over. The listener hears the music exactly the same way cacti time. Music is no longer ephemeral. The listener thus runs the risk of inadvertently turning into a music analyst who spends more time evaluating a recording than experiencing the music itself. Talk about your zany ironies - phew! [C.W.Vrtacek • jamjwy. 1994 • guiu-okd, CT] Steve MacLean I'd rather hit or play something these days than sit at the computer for too long. My interest in this project was to see what kind of piano orchestra/rock band I could create by taking apart an old upright. I used a single U87 mic with a little compression and EQ going [o DAT tape while I struck, plucked, muted, scratched and prepared individual sounds, paying attention to any sympathetic sounds from the instrument and the room. I explored the piano while my DAT captured the results. By accident in.: dropped the dime I was using for some of the scratching inside the instrument. It bounced, then settled. I knew 1 was getting somewhere. I could look down into the piano and predict certain probabilities of the dime hitting something that sounded interesting - I began placing things in its path to allow for more variations. Many of the sounds I recorded to DAT were then digitally sampled and used to create my pieces. You can hear the dime drop 1 chose in No. 1 and No. 3. It is satisfying to edit raw material that was at one time played. It is a very organic process. Some sections of No. 3 use Dr. Ts PVG module to create variations on the opening line based on a set of parameters defined for pitch and rhythmic figures, but I generally do not gravitate Toward these compositional devices. In this case you have a distinct solo line and can hear clearly what is becoming of the line. The listener can be pulled along in this way, and is not merely dumped into the middle or end result of a process in which they have no reference for what they are hearing. About "Piano Drop": I was invited to perform some electronic manipulations on a piano dropped from a 100' crane for the New Music Across America Festival, 1993. The piano was stuffed full of glass and other objects (unfortunately) and a wireless mic (fortunately) - the mic proved useful for hearing the descent to the ground, but the impact was less exciting. There were 2000 spectators in the parking lot to witness the event: a large upright piano swirling way up there and all these expensive mics placed around the impact area. After the drop, the crowd moved in and began milling through the wreckage, adding to the sound, as the mics were still working. The crowd altered the outcome of the whole piece. This reminded me of another event where I experimented with rolling marbles on a large PZM miked hardwood floor projected in stereo to an audience of 400 people. About three minutes into the piece the audience got involved by throwing whatever they could find into the performance area: paper airplanes, programs, and stray marbles. Soon we were all in a state of chaos thick with noise, yelling, and laughter from all directions. The piece ended with everyone feeling very satisfied. A brief comment on each piece before I'm off on another adventure. Ail sounds in these pieces were created from the source piano. No. 1 sets the mood for listening and establishes the 5 over 4 feel- No, 2 features a one siring SynthAxe gliss piano solo. No. 3 gets back to the dime drop with development in all directions. No. 4 is a mirror image piece with a snappy interlude. [steve maclean • february 14, 1994 • portland, ME] Produced by Nick Didkovsky • Mastering by Roger Selbel ai SAE - All composition copyrights held by their respective composers, and registered with BMI * C.W" Vnacek's sections realized on The E-mu Emulator by Proteus. courtesy of Nicolc LePera • Piano Drop recorded by J. Elnier, used with permission. Contact: Nick Didkovsky/Nerveware. 118 East 93rd Street, Apt 9C, NY, NY 10128, e-mail 7225O.3313@compuserve.com • C.W. Vrtacek/Leisure Time Records. 329 Lake Drive, GuiLford, CT 06437 • Sieve Madfan/Mobius Records, P.O. BOX11035. Portland, ME 04104 -* & © 1994 Pogus Productions |