December 19, 2006
IN WHICH THE AUTHOR LEARNS ABSOLUTELY
ALL THERE IS TO KNOW ABOUT THE WORLD
OF ELECTRONIC AND COMPUTER MUSIC -- OR NOT.
Barbara Jazwinski stops in mid-sentence. From somewhere on the other side of her office we hear the whiny sound of the world's tiniest concerto. It's the ring tone of her cell phone, the device's petite speaker rendering some of life's most beautiful music -- digitized, compressed and downloaded -- into little more than an irritating alert. Roll over, Beethoven, and while you're at it, e-mail Tchaikovsky.
Writers enjoy irony as much as musicians like counterpoint, so I can't help but sit here, as Jazwinski takes the call, and contemplate the fact that only a minute ago she was speaking so passionately, eloquently and lovingly about the power of music and how electronic and computer-generated sounds will be integral to the musical composition of this and coming centuries -- only to be interrupted by this cheesy little noise. Ludwig, Pyotr and I, apparently, have a lot to learn.
And despite the advantage I hold in actually living in the 21st century, I'm getting the queasy feeling that I have a lot more to learn than they do. It is late August and Jazwinski, chair of the Newcomb Department of Music, and her campus colleagues are anticipating the November arrival at Tulane of the International Computer Music Association's annual conference.
For the first time ever, the conference will be held in collaboration with SEAMUS (the Society for Electro-Acoustic Music in the United States). Additionally, this is the first time that the conference, which is held on a different continent every year, is headed to the American South. So this is a Big Deal, especially when you consider that ICMA organizers had to remain committed to New Orleans after Hurricane Katrina blew the city into a decidedly minor key.
At the moment, I'm having fun talking to Jazwinski about computer and electro-acoustical music-making, but am beginning to wonder if the premise for this story, which roughly has to do with how "computer music" relates to "traditional music," is now being shot to pieces by the irksome intrusion of reality -- and no, I'm not talking about the cell phone.
Jazwinski is echoing what a couple of her colleagues have already told me -- that the electronic- and computer-based hardware and software used to compose, synthesize, record, process and reproduce music are a natural and necessary step in the evolution of music that dates back to the first rhythmic clap of stick to stone. Asking how a computer relates to the production of music is rather like asking how a drum, flute or piano does. Well, you know, you play them. End of story. Or it would be if there weren't so many interesting things to talk about.
Let's skip back in time a couple of days and here I am, sitting with Tae Hong Park, an assistant professor of music who has been at Tulane for about two years. "I arrived with Ivan," Park tells me, and I'm thinking this is another member of the faculty I should interview. "Hurricane Ivan," he clarifies. Oh, yeah. I'm such a traditionalist. I remember the days when the name of a particular storm was not part of our formal introductions.
Along with having a good sense of humor and interesting ideas about music, Park serves as a kind of living metaphor for where music may be heading. He is the Renaissance Man redefined for the new millennium, where the distinction between left- and right-brain thinking is going the way of the floppy disk or the rackett, a somewhat irritating woodwind instrument that went all but extinct about 400 years ago.
Park, whose efforts were largely responsible for bringing the ICMA conference to Tulane, sits at the console of the music department's new electro-acoustical studio, rather like Captain Kirk on the bridge of the U.S.S. Enterprise.
The room surrounds us like a pod; we are encircled by speakers, and the cool glow from a thousand light-emitting diodes creates a soothing and contemplative geek firmament.
Park and I scan the room as he describes the soundboard, audio and video monitors, and various stacks of digital musical-processing equipment, but I wonder if we really are seeing the same stuff in the same way. Park has the mind of an electrical engineer and a deep understanding of how to coax sound from all this circuitry. I'm more like a customer browsing the counter at RadioShack. Park, in fact, was educated as an engineer at Korea University years before he embarked on the formal musical training that would lead to his PhD.
"I am a composer and I have been trained as a scientist and engineer," says Park. "It is my goal to do both because they are very interesting to me." The quirky symbiosis between science and art is at the heart of his "Music and DSP" (digital signal processing) course, a core course in the department's music, science and technology program.
Students learn about the physics of sound and how to tweak the physical nature of sound through software and hardware processors. It's all about how sound waves behave and how they can be manipulated by programming and developing various algorithmic processes to twist, stretch, compress, pull apart and put together audio signals. I pose the question that is at the heart of my story.
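For the curious reader, the kind of algorithmic manipulation described above can be sketched in a few lines of code. This is not drawn from Park's actual coursework -- just a minimal illustration, using Python and numpy, of treating sound as numbers: synthesize a tone, then "put together" the signal with a delayed copy of itself to make an echo. The function names and parameter values here are my own invention.

```python
import numpy as np

SR = 44100  # audio sample rate, in samples per second

def sine(freq_hz, dur_s):
    """Synthesize a pure tone: a sound wave as an array of numbers."""
    t = np.arange(int(SR * dur_s)) / SR
    return np.sin(2 * np.pi * freq_hz * t)

def echo(signal, delay_s=0.25, decay=0.5):
    """Mix a signal with a delayed, quieter copy of itself --
    one of the simplest algorithmic transformations of audio."""
    d = int(SR * delay_s)
    out = np.zeros(len(signal) + d)
    out[:len(signal)] += signal       # the original ("dry") signal
    out[d:] += decay * signal         # the delayed, attenuated copy
    return out

tone = sine(440.0, 1.0)   # one second of concert A
wet = echo(tone)          # the processed ("wet") signal
```

Every studio effect Park's console can apply -- reverb, chorus, pitch-shifting -- is, at bottom, a more elaborate version of this arithmetic on arrays of samples.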
"What is the difference between traditional and electronic composition?" I ask. "Good question," says Park, and I'm already glowing with self-satisfaction when he continues, "I don't think there is much of a difference." Oh. "As a composer," he says, "you have this product that you are looking for, and whatever you use to get there is what you will use. Maybe the techniques involved in using computers and electronic media are different, but it is the musical product you are looking for."
Hold that thought for a moment while I digress somewhat. In my conversation with Jazwinski she talked about what makes a good musical "product." An important component of that is how music moves through time. "If a musician is not able to sustain a high degree of interest throughout the piece, that means something went wrong. The tension was allowed to drop too far and it is very difficult to get the piece moving again." A good work of music, she said, is well-constructed in terms of its dramatic unfolding. "Imagine reading a whodunit novel and finding out the culprit on the second page. It makes no sense to read the remaining 200."
All of which is making me acutely aware that, like a pop song or Gregorian chant, this very story that you are reading is playing in real-time. Doesn't matter that it was assembled in fits and starts inside a word processor over the course of several days. The end product of a story, if it works, will have the musical qualities of rhythm and tempo, along with the ability, as Jazwinski notes, "to introduce people to a concept and move it from one point to another." We are almost back to Park, who was just speaking about the techniques in using electronic media, but if I don't orchestrate a little background information here, you may stop tapping your feet and lose interest.
The applications of electronic and digital technology in media are just this side of infinite. There are synthesizers that use frequency modulation and other algorithms to generate extraordinarily rich and complex sounds. There is hardware that stores and plays samples of recorded sounds. There are software programs that are used to record sound -- it could be a clarinet, French horn, barking dog, human voice, stopping streetcar -- directly onto a computer's hard disk, as well as software that can virtually replicate a full-fledged musical studio.
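Frequency modulation, mentioned above, deserves a word of explanation. The classic two-oscillator FM technique lets one sine wave wobble the phase of another, which scatters energy into sidebands and yields surprisingly rich, often bell-like timbres from almost no computation. The sketch below is a textbook illustration, not any particular synthesizer's implementation; the names and the carrier/modulator values are mine.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def fm_tone(carrier_hz, mod_hz, index, dur_s):
    """Two-oscillator FM synthesis: the modulator oscillator varies the
    carrier's phase, producing sidebands and a complex timbre."""
    t = np.arange(int(SR * dur_s)) / SR
    return np.sin(2 * np.pi * carrier_hz * t
                  + index * np.sin(2 * np.pi * mod_hz * t))

# An inharmonic carrier-to-modulator ratio gives a bell-like sound.
bell = fm_tone(200.0, 280.0, 5.0, 2.0)
```

One formula, three numbers to tune, and an enormous range of sounds -- which is roughly why FM became the workhorse of early digital synthesizers.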
Other software can create an interactive environment for musicians to have a call-and-response relationship with computer-generated sound live on stage. Jazwinski is fond of software called Sibelius, which assists her in scoring compositions. So when Park says, "Nowadays everything goes through the computer; there is a lot of processing going on," you can see what he means. The application of all this technology ranges from the mundane to the sublime.
For instance, at a very simple level, a musician can use a keyboard and computer to play and record something that sounds almost as if it were being performed by the string section of an orchestra. In another example, rap musicians have for years sampled bits and pieces of already-recorded work and orchestrated them into new and original music. As for Park, he is working in the genre of art music, where, in his case, technology, conceptual thinking, timbre, emotion, rhythm, psychology and who-knows-what else meld and jostle each other in a cerebral, visceral mix of sound.
He presses a few buttons and calls up a piece called "t1," where the melody line of a muted trumpet peeks in and out of a whooshing, sometimes roaring texture of sound. It's challenging and exhilarating to listen to, and unlike anything that has ever reached my ears. And that's extraordinary, if you think about it -- to hear sounds so new and different that you may as well have your headphones plugged into the planet Neptune or an ocean cave. In another piece, "Omoni" (the Korean word for mother), Park digitally assembled snippets from hours of interviews of people talking about their mothers.
"Omoni," however, is more than a sequentially arranged documentation of comments. The voices are set in a three-dimensional sonic environment, appearing from and disappearing into the depths of the piece, sliding by each other as they drift in and out of our hearing. In this aural montage, phrases such as "I love you" and "please call me" play off each other, slowly constructing an emotional tapestry. It's both poetic and musical, but you won't be hearing it on your FM dial.
"Speech is especially interesting source material for musical creativity because speech is pitch-based," says Park. "I tried to extract different musical parameters from these samples and also synthesize certain things."
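Extracting a musical parameter such as pitch from recorded speech is a well-studied problem, and while I have no idea which tools Park actually uses, the textbook approach gives a flavor of it: slide a short frame of audio against a copy of itself and find the lag at which it best matches -- that lag is one period of the voice's pitch. Everything below, down to the function name, is an illustrative sketch, with a synthetic tone standing in for a spoken vowel.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def estimate_pitch(frame, fmin=60.0, fmax=400.0):
    """Textbook autocorrelation pitch estimator: find the lag (in samples)
    at which the frame best correlates with a shifted copy of itself,
    searching only lags that correspond to plausible voice pitches."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(SR / fmax), int(SR / fmin)   # lag search window
    lag = lo + int(np.argmax(corr[lo:hi]))
    return SR / lag  # convert period in samples to frequency in Hz

# A 150 Hz sine as a stand-in for a voiced speech sound.
t = np.arange(SR) / SR
voiced = np.sin(2 * np.pi * 150.0 * t)
pitch = estimate_pitch(voiced[:2048])
```

Run frame by frame over an interview recording, an estimator like this turns speech into a stream of pitch values -- raw material a composer can then treat musically.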
Park, whose primary instrument is the electric bass, says about 60 percent of his compositions are electronic or electro-acoustically based, with the balance being purely acoustically based work. He acknowledges that there is always the risk of allowing the bells and whistles of technology to overwhelm the compositional and artistic aspects of a work. "What I teach my class," says Park, "is my philosophy that the musical product, be it electronic or non-electronic, is successful when the machine becomes transparent." Interesting point, I reply, and make a note that he said "non-electronic" instead of "acoustic."
"Electro-acoustic really means music coming out of speakers," says Paul Botelho. I like the clarity and succinctness of this definition. Botelho is a visiting professor who arrived at Tulane last January. He's working on his dissertation, which examines the effect that different recording production styles, recording equipment and recording media have had on music.
Among the things he is teaching at Tulane is "20th Century Music Theory." "The rules that govern pitch durations, pitch timbres, everything went out the window in the 20th century," he says. "It really took music into surrealism."
Botelho plays guitar and piano, but his main instrument is his voice. "I do a lot of extended vocal technique, which means looking at the voice not so much as you do in traditional singing but rather to get lots of different sounds and timbres."
I nod in the kind of way that is intended to mask cluelessness and respond, "Oh, 20th-century kinds of stuff." "It's 21st-century stuff," he laughs. Cool. I ask him what defines music and he echoes themes shared by his colleagues. "A certain amount of brain and a certain amount of heart," he says.
Botelho works with mostly electronic media, using recorded sounds of instruments or his own voice as building blocks and largely staying away from synthesized sounds and, oddly, rhythm. His training in technology, at least initially, was somewhat less formal than Park's.
"My dad bought me a Commodore 128 when I was 11," he says. "But he didn't buy me any software. So I started programming, making little games and then sound experiments. That was the beginning." Later, in college, a teacher would tell Botelho that when working with computers, 90 percent of the discovery comes from mistakes. "I love mistakes," he says.
To my mind, Botelho's appreciation of the "happy mistake" that leads to understanding is entirely consistent with his dissertation on the relationship between recording technology/process and music. In any cause-and-effect dynamic there are intentional and unintentional consequences. I ask Botelho if he thinks there are similarities between the repercussions caused by today's electronic media and, say, the advent of the piano some 400 years ago. "I'm sure there are still some people railing about how the piano was a travesty to music," he laughs.
He notes that at the beginning of the 20th century far more people played the piano than do today, because that was an important way to have music in the home. When audio recordings became popular, society as a whole became less musical, relying on recordings for entertainment. In recent years, however, the ease and utility of computer and electronic music technology have been encouraging more people to learn, perform and experiment with music.
Botelho's technology-crammed office is tucked away in the inner sanctum of Dixon Hall, up some stairs, down some stairs and around a few corners. I worry about finding my way out, but before I leave I ask to hear something he's composed. He reaches for a DVD of a one-act opera entitled "the light" that he wrote and staged. "It's based on a dream I had. The story is that we are time travelers and we are slowly waking up over the course of 10 minutes. Our muscles haven't gotten to the point where we have learned how to use them yet...." Strange stuff.
The imagery on the monitor flickers a stylized and surreal scene and the tiny office fills with a texture of sound that groans and shimmers in an unworldly way. When the characters talk, their voices grind in a harsh, inverted, metallic timbre. This is not what comes to mind when I think of opera. "It's all in the listening," says Botelho.
When Barbara Jazwinski wants to demonstrate a musical idea to me, she doesn't fiddle with knobs or switches. Rather she goes over to the grand piano in her office and plays a few bars of Chopin. Which isn't to say that she is a conservative traditionalist or acoustical purist. In fact, it is under her leadership that the department's music, science and technology program has flourished.
"I've been working with computers for a long time and consider electronic music absolutely essential," says Jazwinski, who is an internationally recognized and highly sought-after composer. "The issue, of course, is talent. You can create anything with technology, but when you have so much freedom it requires an enormous amount of talent to know exactly what to do. Technology is one thing. Imagination is another."
And while important works are being written for and through electronic media, it will be a while before the world hears consistently successful works.
But that's the way it's always been with new musical instruments, Jazwinski says. Instrument makers develop and then refine their product while composers experiment by writing pieces for it -- first simple compositions and then increasingly complex ones.
The modern piano was born at the turn of the 18th century and it wasn't until 1742 that J. S. Bach's "Well-Tempered Clavier" ushered in the period of "glorious writing for piano," says Jazwinski. And who knows?
Perhaps one of the 350 conference attendees who will be coming to campus in November will have the equivalent of Beethoven's Ninth or Tchaikovsky's Sixth neatly stored on his or her flash drive, waiting for the chance to upload to the world something extraordinary and enduring.
As for me, I've been up nights lately, noodling on my home computer, headphones clasped over my ears, knowing that the issue, indeed, is talent, yet encouraging myself that writing a good piece of music can't be all that different from writing a good story. Can it?
Nick Marinello is a senior editor in the Office of University Publications and features editor for Tulanian.