Sunday, December 6, 2009
This morning, I had the great pleasure of reading the following article by composer Annie Gosfield.
While many composers are asked this question on a regular basis, I found Ms. Gosfield's highly articulate and decidedly atypical response to be quite refreshing. This article should be required reading for all student composers out there.
Thursday, December 3, 2009
Allow me to share a little secret with you: I hate making decisions. More specifically, I hate the nervous energy, the anxiety, and the often overwhelming pressure that accompanies decision-making. However, I thoroughly enjoy having MADE a decision. There is a tremendous amount of satisfaction that comes from having finally decided what to do and/or how to proceed in a given situation - especially if it turns out to be the "right" one (although even the "wrong" decision can sometimes lead to serendipitous results!).
So, it is ironic that I find myself a composer - a vocation that, at its most basic level, consists entirely of decision-making. These decisions occur at all levels of the composition process: Do I compose for a string quartet or a saxophone quartet? Do I start fast, slow, or somewhere in-between? Will my first note be a B, a B-flat, or any of the other ten notes available to me (assuming I've chosen to use a well-tempered tuning system)? So. Much. Pressure!
The above examples, though, are somewhat superficial when compared to the big decisions that a composer must inevitably face when in the depths of the creative process. The big question that I am alluding to here is, quite simply: "What happens next?" This is a question that we all wrestle with frequently when composing. We write a section of music, and then find ourselves stuck trying to figure out where the music should go in the next section. Should the music change? Should it repeat? Should it extend what has already happened? Should it introduce a new idea? These are all questions that we might have when reaching this moment of compositional indecision.
In actuality, we really only have two concrete choices to make as composers when we get to this point:
1. Do I develop what I currently have? or
2. Do I contrast what I have with something different?
This fundamental dichotomy - to continue or to change - is the basis from which all other decisions stem. For example, if one were to continue onward with a new section that functions as a continuation of the previous section, then further decisions need to be made as to what developmental techniques and processes should be used. Conversely, if one were to change up the music with a contrasting idea, then additional decisions need to be made as to what changes will ensure that ENOUGH contrast is established. It is important to note that both of these paths begin with the original idea, and that even if one chooses to contrast the original idea with something new, that new idea should still be linked to the original as a RESPONSE to it.
Both Finale and Sibelius offer the composer a unique tool for dealing with this decision-making process: the ability to "audition" several different approaches before choosing which path to take. Using playback, one can compose several different "paths" for the music to take, and then audition each one in turn. While I often caution students not to overuse playback, using it as a way to hear multiple variations of an idea can be quite useful when trying to make a concrete decision. The danger here is that if one listens to the SAME idea too many times, it is possible to convince oneself that this is the only path for the music - all other ideas will begin to sound incorrect, even if they are in fact better choices. Avoid this by listening to all possibilities equally until a decision has been made.
Sibelius offers a second tool here that can greatly aid in the decision-making process - the "Ideas" panel. I mentioned a while back that I had yet to use this tool, but always believed that it could be very useful given the right circumstances. Having now incorporated it into my workflow, I firmly stand by that belief. All too often I will create a musical idea or motive without knowing exactly where it will fit. By placing these musical fragments into the Ideas panel, I can streamline my decision process by referring to the panel whenever these key decisions arise.
Either of the aforementioned decision paths - to continue or to change - can be assisted through this tool. Assuming that I would at some point want to continue with my existing idea through development, I will always ensure that the ideas panel contains my original musical motive as a reference point (a practice which I recently began and now do with every piece that I write). This can be especially useful if my music has developed to the point where the original motive is almost unrecognizable. Having a convenient location to reference this motive is incredibly useful, and serves as a reminder for me as to where all of this music originally stemmed from. Likewise, assuming I will eventually want to contrast my current music with something new, I will also ensure that my ideas panel contains additional motives and concepts that I came up with in the early sketching stages of the piece. Using the ideas panel in this way has the added benefit that all of these ideas can be auditioned within the panel itself before bringing them directly into the music.
While none of these tools will replace the composer's responsibility to actually make the decision, being able to audition these ideas through both playback and the Ideas panel allows the composer to make a more educated decision - especially when combined with good compositional technique and a good ear! In the end, it is the composer's duty to ensure that several possible outcomes have been considered at each key point in the music before a proper decision is made. A decision doesn't need to be rushed, and while a little "analysis paralysis" might occur as a result of too much consideration, making a quick and hasty decision is a far worse possibility. These compositional decisions are what will inevitably separate a good piece of music from a great one, even if at times they can seem agonizingly difficult.
Thursday, November 26, 2009
In the meantime, have a great holiday and enjoy your Tofurky!
Thursday, November 5, 2009
As a composer, I seem to spend quite a bit of time at the airport. Not traveling, necessarily - but rather sitting and waiting to travel. Ironically, I will be spending more time today at the Fresno Air Terminal (which bears the unfortunate abbreviation of F-A-T) than I will on my actual flight up to Portland.
Of course, with all of this spare time waiting, it would seem like a great time to crack open the computer and compose a little bit, right? Unfortunately, as a composer who likes to have a very specific setup when working, the end result is that my ability to compose (at least in Sibelius) isn't very portable. I have my laptop, yes - but I don't have my keyboard controller (nor do I own one that is truly portable), and - oddly enough - my laptop isn't the computer that I use to compose on anyway. I know - strange. But, that is what I am comfortable using, and I am not about to change that any time soon.
This is why I bring a printout of anything that I am working on with me when I travel. Composing on the computer may be my main compositional approach, but when I am on the road I like to work on my music the old-fashioned way, with pencil in one hand and a big "fat" red pen in the other. The music that I write during this time ends up getting sketched into the blank measures at the end of the score, often resulting in many bizarre scribbles, scratch-outs, and pictures that might be more like hieroglyphics than music notation. Additionally, editing seems like a much less painful process for me during this time, and as a result the red pen ends up getting quite a bit of use as notes are changed, stripped out, added in, and transformed. Upon returning home, I often find myself with an inordinate amount of new material, as well as edited material, all to translate and incorporate into my digitized score.
The strange truth is that by doing this, I believe that my music becomes all the better for having gone through this process. Looking at the printed music on paper, editing what I have with a nice RED pen, and writing new music by hand - even for just a brief period of time - all seem to help me gain a new perspective on my music that I wouldn't have had if I had composed it out in its entirety on the computer. It could be as simple as the temporary change of venue, but I honestly believe that by forcing myself to look at my music using different methods, I end up creating a better piece.
I often muse to myself that I should bring my music "on the road" with me more often - even if it is a simple road trip to the California coast. The truth is that I really do enjoy these brief periods where I look at my music in a more "traditional" light. The change of perspective isn't just helpful - it is needed. It serves as a "reality check" of sorts for me, to ensure that my music hasn't become some sort of computerized monstrosity.
For those of you reading, I might suggest that you find your own way to allow yourselves these brief "computer" vacations - where you break out the pen and pencil yourself and work on your score free from the trappings that these programs can occasionally thrust you into. This is especially so if you find yourself like me, trapped in one specific location and unable to compose anywhere but your own personal workstation. Occasionally allowing yourself these moments may translate into a unique new idea for your composition, a new perspective that simply wasn't evident before, or complete new music that you wouldn't have come up with any other way.
So, if you'll excuse me, I am going to break out my printed score now and sit at the airport bar for a while - red pen in one hand, and a nice black coffee in the other. How romantic.
Thursday, October 29, 2009
This blog is a must-read, if only because it allows us a chance to further understand Mr. Adams' unique perspective of the world. I have enjoyed reading each and every one of his posts, and invite all of you to do the same!
Thursday, October 22, 2009
It is amazing to me how quickly this Fall season is passing. Just this past week, we had our first tule fog here in the San Joaquin Valley, meaning that Fresno's relatively mild Winter is just around the corner. Only three weeks ago, we had 100+ degree temperatures. To me, that just seems like bad pacing. Surely it wouldn't have hurt to spend a bit more time in Autumn, with its seventy-odd degree temperatures, clear smogless skies, and beautiful mountain vistas.
Believe it or not, this mindless diatribe about the weather is actually relevant to my discussion today about musical form. For me, musical form is directly related to two distinct topics: pacing and musical contrast. Like the seasons, a good piece of music should contain distinct, tangible changes from section to section in order for each section to be identifiable as new. Likewise, the pacing of each section should be sufficient to ensure that the listener has had "just enough of the good stuff" (as opposed to the aforementioned Autumn that simply wasn't long enough!).
Before I go too far on this topic, I need to confess something first: I am not a huge fan of ABA structures. This standard approach to form - one of statement, contrast/development, and restatement - is, in my opinion, overdone. Clearly, there is a good reason for the prevalence of this structure, as a typical ABA piece will allow for a consistent and transparent, albeit overused, musical rhetoric. However, for me personally it is far more interesting to develop a musical rhetoric that avoids this model. AB structures, for instance, hold a tremendous amount of rhetorical possibility by simply omitting any sense of return. Each section becomes its own individualized journey, leading you to various "points of no return" along the way. This does mean that your individual A and B sections need to be larger and more fully developed by comparison, though, and that is where we come to the topic of pacing and form.
Probably the single BEST use of MIDI playback is its ability to give the composer a very quick and concise understanding of the music's pacing. Even though the timbres and balances of playback are not the best representation of one's music, the sense of pacing that one gets from MIDI playback is surprisingly accurate. Using playback, it can be very easy to tell whether a section of music is too short, too long, or "just right." This helps the composer immeasurably when trying to shape the form of the piece, as it can be very evident when it is either time to move on to a new section, or when the previous section needs further development.
The key to this, though, is practice. Understanding one's own musical pacing is a learned skill, and in order to improve this skill, a composer needs to constantly check and double-check whether or not there is ENOUGH music. More often than not, the young composer errs on the side of rushing through a section, rather than having too much. In fact, it is quite difficult - although not impossible by any means - to have TOO MUCH music in a section. Finding the pacing "sweet spot" - that is, the point where the section of music feels just right - is as much a learned skill as learning to compose on the whole or learning to play an instrument. Playback will help with this, but the composer must develop their own sense of musicality in order to truly comprehend what they are hearing, and thus make good choices as to the amount of music that is needed in a given section.
Oddly enough, knowing how to compose a well-paced section of music is actually the beginning step in learning how to create a good form in Sibelius and Finale - not the end goal. Once this skill is developed, the composer is suddenly armed with a tremendous amount of resources. The composer can INTENTIONALLY CHOOSE to shorten or lengthen a section for emotional effect. It is quite jarring to establish a well-paced section of music, only to intentionally cut it short by interrupting it with another section of music. Likewise, the composer can stretch out an already well-paced section to the point where it becomes monotonous, creating a sense of obsession or endlessness. It is at this point that the composer is able to take control of their own form, and break free of traditional conventions like ABA or simple song forms.
Sibelius and Finale are incredibly good tools for this process. As stated above, the composer first needs to develop their own sense of pacing - either with or without the assistance of MIDI playback. Afterwards, the composer can then use tools such as "Copy/Paste" and the "R" key (Sib.) to establish repetition as a point of rhetoric. As I discuss in my prior blog posting "Rinse and Repeat" (July 2009), repetition on its own can be a great tool as long as enough variation is included in the process. Using these tools of repetition, the composer can chop up, extend, interpolate, fragment, and otherwise completely disassemble and reassemble previously created sections of music to create a related, yet still-contrasting musical section. Essentially, this allows the composer to develop their music using the tools natively embedded in music notation software.
Eventually, though, true contrast is needed for a piece of music to continue forward. This can be done in one of two ways - through continuous development, or through the introduction of contrasting material. Either of these is a valid method, as both allow for the creation of a true "B" section - that is, a section of music that is motivically and harmonically distinct from the "A" material. At this point, I often choose to mark this point in my Sibelius score, usually with a double barline. This helps to identify the new section as separate from the old - a point where the music goes in a new direction. Other elements - tempo, dynamics, texture, etc. - will likely change as well to help reinforce this new section.
Taking a step back from this process, it can sometimes feel overwhelming to comprehend the entire form of a piece while looking at only a few measures at a time. It is at this point that the composer needs to literally get a "bird's-eye" view of their entire piece, either physically (through print-outs) or virtually (by zooming out on the computer screen). I (as well as many of you readers!) give a few suggestions on how to deal with this in my post "The Big Picture" (July 2009). I'm not going to rehash my comments, but rather simply state that it is important to find a method that allows one to take in the entire piece in one viewing - literally, all at once! Even if the notes are indistinguishable from one another, seeing the shapes, the relative "darks and lights" that come from black and white notes, and the presence of space in the piece are all incredibly useful in understanding how the form of one's own piece works. Once this has been done, decisions can then be made about whether any one section needs to return, whether one section predominates too much, or whether the form even makes any sense!
Having said all this, I would like to return to my initial comparison of pacing and the weather. Although we may differ on what kind of weather we prefer, one thing that most of us recognize is that too much of any one weather pattern - regardless of whether we like it or not - can become quite oppressive over time. Likewise, a weather pattern that is here and gone in too short a period of time can feel unsatisfactory and fleeting. Musical form operates the exact same way. As the composer, one should take the time to allow an enjoyable section of music to go on for just enough time so that it feels satisfying, but not so long that it becomes oppressive.
Now, if only we could have some more rain.
Tuesday, October 13, 2009
As the festival director, I will be incredibly busy this week. So (here it comes...), the Electric Semiquaver's regular blog posting won't be back until next week. I hope to be able to post with greater regularity beginning with next week's post, which incidentally will be on applying and understanding FORM within Sibelius/Finale. Until then...
Thursday, October 1, 2009
It seems that one of the most misunderstood concepts in music theory is what is meant by "musical texture." More often than not, when I ask students about the texture of a specific piece of music, I am greeted with a well-intended but completely inaccurate response along the lines of "The music is soft and rubbery," or "Beethoven created crunchy music!" (I kid you not - these are REAL quotations from papers I have received).
So, before I go any further, I would like to point out that texture, in the context of music, specifically refers to the number of musical voices along with their functions relative to one another. Specific textural terms include "monophonic" (a single voice), "polyphonic" (multiple independent voices), "homophonic" (multiple voices moving together, typically a melody with accompaniment), "heterophonic" (simultaneous variants of the same melody), etc. These terms can be further divided into several sub-categories. For example, a polyphonic work is often assumed to be contrapuntal - however, any piece of music that has an independent melody, countermelody, and a corresponding accompaniment can fit this definition as well.
Now that we have our definition out of the way, let's move on to how music notation software directly impacts the composer's understanding of musical texture. Allow me to begin with an assumption (always a dangerous thing!): It can be assumed that, when composing polyphonically, there may come a point where even the most gifted of pianists would be unable to play every simultaneous line of music present in a given orchestral or choral work. Ligeti's use of micropolyphony, for example, would be impossible to perform on a single instrument due to the sheer number of voices occurring at any one time. Additionally, the very nature of these lines existing independently of one another would make it impossible for one brain to process all of them at the same time. Of course, this would never be the case in Ligeti's music - it isn't one brain, or one performer, that is tackling all of these independent lines, but rather several working together to create the music (one could argue that a single brain - the conductor's - is holding it all together, but the conductor's role is not the same as that of the performers, and as such the conductor isn't processing the same information in the same way!).
However, in the case of music notation software, we are in fact dealing with one brain and one performer - the CPU. A single computer chip is capable of processing far more musical information than any single brain can (or should!) process at any given time. This means that, regardless of complexity, a computer will be able to play as many musical lines as its CPU, RAM, and hard drive will allow, with flawless accuracy. This allows for limitless possibilities in the realm of contrapuntal density, including: canons at the 16th and 32nd note, infinite numbers of independent non-canonic lines, complex rhythms that exist in counterpoint with each other in the same instrument (different hands), etc. Using playback, a composer can audition and hear any combination of contrapuntal lines without needing to worry about PERFORMABILITY.
The obvious downside to this is that, although the computer might be able to play all of this, human beings often cannot. This goes beyond the relative difficulty of a single line of music - in fact, it is often the case that relatively simple lines of music become unbearably difficult to perform when placed in counterpoint with one another. There are actually two problems here. First, in the case of instruments that are capable of playing contrapuntally (i.e. keyboard instruments), counterpoint composed in music notation software is often written in such a way that it becomes nearly impossible for one performer - one brain - to comprehend the music. Admittedly, this is true for music composed in music notation software as well as music composed by hand. Take, for example...well...any Bach Invention. Each individual line is, by itself, not that difficult to perform. However, when placed together in real time, the performance difficulty spikes! (Let's not forget the Ligeti Etudes - specifically, Etude No. 1, "Désordre" - which I have been told by many performers requires the pianist to simply think of each eighth note as one event in order to successfully play the music!).
The second problem has to do with multiple performers playing multiple polyphonic lines in a chamber setting. For chamber music to be performed successfully, individual players in the ensemble need to listen to one another so that they can stay together. However, if everyone in the ensemble is playing something different, their ability to stay together is compromised. As the complexity of each individual line increases, the ability for the ensemble to stay together diminishes. Of course, if a conductor is thrown into the mix this becomes a moot point, but I shudder every time I think of string quartets, piano trios, and other mixed chamber groups of four or fewer musicians that had to be conducted simply to keep them together!
Another issue that should be addressed has to do with textural variety. As mentioned above, music notation software makes it quite easy to audition and hear simultaneous contrapuntal lines in a way that one might not be able to hear otherwise. However, this very same playback stutters and falls apart when attempting to make a single MONOPHONIC line sound palatable. Nothing sounds WORSE in playback than one line, completely isolated and lacking any support from its fellow virtual instruments. However, that same monophonic line - as performed by a real, living musician - can sound absolutely breathtaking! The end result is that, more often than not, music composed in music notation software relies upon too many simultaneous lines of polyphony, with little to no textural variety in the form of monophonic, or even homophonic, sections of music.
So, here are a few tips that I would recommend to any composer who wishes to avoid many of these issues:
• Variety IS the spice of life. Make sure that your composition has plenty of room for monophonic lines and homophonic "tutti" sections, as well as areas of rich, dense polyphony.
• When composing for a single instrument that is CAPABLE of performing two or more polyphonic lines, take considerable time to check, double-check, and triple-check your counterpoint. Make sure that the lines are coordinated in a way that is still performable for your player. Remember - difficult is ok, but impossible is not.
• When composing chamber music without a conductor, include sections where you pair up performers with similar rhythmic activity (in homophony) so that no one performer is ever completely isolated from the rest of the ensemble (in a string quartet, for example, pairing the violins on one line, in harmony, while the viola and cello perform a different line). Solos, polyphony, and independence should still be used; just remember to provide unison moments in your piece where, should individual players get lost, they can get themselves back on track.
• A single, monophonic line will never sound as good in MIDI playback as it will with a live performer. Utilize these solos, and if you can't stomach the playback of the line - don't listen to it!
That is all for now - I am currently going through a very busy period in my schedule (as you might have guessed based upon the tardiness of this posting), so for now I will be continuing on a biweekly schedule. Look for my next post in two weeks!
Thursday, September 17, 2009
Well, it seems that my regular weekly posts on "Sibelius Composition" have once again been hit by the tardiness bug. However, better late than never, right? This week, I would like to open up a discussion on the role that music notation software MAY have in increasing the performance difficulty of new compositions. I say "may," because in all honesty I'm not 100% sure that this is an issue that stems directly from the notation software itself.
Let's set the record straight: performers labeling new compositions "difficult" is nothing new. We all know the story of Mozart's Clarinet Concerto, which was considered so excruciatingly difficult at the time it was composed that Mozart himself claimed it was a "joke," to see just how far he could push the instrument. Today, it is the most widely performed concerto in the clarinet literature. Throughout the 20th century, compositions of a wide variety of difficulty (ranging from "mildly challenging" to "WTF! FML! HOW DO I PLAY THIS??!?!?") have been created both by hand and within the computer, all with varying degrees of success and/or failure. I myself have often been accused of writing music that was "too difficult," but often the end result is still very satisfying for both the performer and the audience alike.
Some of the more overt reasons for this increase in difficulty have to do with the composer's desire to expand what is considered "possible" by musical instruments. This exploration of "extended techniques" (as they are often referred to) frequently leads to new compositions that, on the surface, seem to ask the impossible of the performer. Sometimes - they are impossible. Other times, however, what appears to be un-performable turns out to be quite workable after the performer spends considerable time practicing the new technique.
Of course, this issue of difficulty presented above has NOTHING to do with music notation software. In fact, on some levels the exploration of extended techniques has been minimized as a direct result of composing on the computer. The limitations of the software often make the prospect of developing (or even recreating) an extended technique daunting, due to the notational challenges inherent in asking the player to do something that isn't standard. This is a bad practice on the part of the composer. Composers should NEVER feel chained by the program, nor should the composer ever choose not to explore an extended technique simply because he or she can't figure out how to get the program to notate it. The decision making process must remain in the mind of the composer - not the software. (Incidentally, I am reminded at this point of many arguments that I have had in the past as to whether Finale or Sibelius can handle extended notation better. Suffice it to say, I strongly believe that BOTH programs can handle these notational challenges given enough patience, creativity, and Tylenol.)
But, I digress. Getting back to the topic at hand - the main area of difficulty that can be directly attributed to music notation software isn't one of timbre, but rather of RHYTHM. The bottom line is that it is incredibly easy - perhaps too easy - to create complex rhythms that, while completely performable by MIDI playback, are next to impossible for a living musician to perform. Complex syncopations, rhythmically intricate counterpoint, nested tuplets/quintuplets/septuplets/etc., and constantly shifting meters and metric patterns are all completely possible within music notation software. Sometimes, these rhythms occur completely by accident (usually due to the unintentional shifting of the musical material by one eighth or sixteenth note). Sometimes, they occur because the composer becomes attached to the vitality that these rhythms seem to have when played PERFECTLY by the computer (with absolutely no tempo fluctuations whatsoever). Other times, these rhythms are EXACTLY what the composer wants, without any regard to the possible difficulty that such rhythms may present.
It IS arguable that many of these aforementioned techniques are in fact completely performable by those musicians who are used to counting such rhythms. In fact, I have been told on more than one occasion that the increase of these complex rhythms has actually contributed to the improvement of rhythmic understanding and virtuosity among a select group of outstanding musicians (often those who are new music specialists). As mentioned above with regard to timbre and the exploration of extended techniques, these rhythms are not unique to the world of music notation software. Many composers - Brian Ferneyhough, for example - have been exploring extreme rhythmic languages completely outside of the world of music notation software.
Nonetheless, one cannot deny that the software does play its part, and that regardless of whether or not there ARE performers who can play this material, many cannot. So - what is the composer to do? If you view yourself as a "rhythmic pioneer" of sorts (such as Ferneyhough), you should change NOTHING. Keep writing complex rhythms, accept that the work is difficult, and seek out the best of the best to play your music (in many regards, I wish I could have that luxury!). If you are concerned about the level of difficulty your rhythms present, though, here are a few tips that may help you:
• Count out your rhythms as you write them. If you yourself have a difficult time accurately counting out your rhythm, it is quite possible that your performers will have that difficulty as well.
• When applicable, use standard terminology to assist in tempo changes, rather than using "written out" ritards and accelerandos (e.g., writing rit. and accel. instead of using feathered beaming).
• Avoid combining multiple levels of rhythmic complexity (for example, layering in counterpoint two separate lines, both of which use nested tuplets AND syncopations).
• As mentioned in prior blog entries, avoid using "extreme" tempos - they won't sound nearly as good in practice as they do in the computer.
• Try not to add in "unnecessary" rhythmic variation. This is admittedly quite subjective, and will require a delicate touch on the composer's part to ensure that there is plenty of necessary rhythmic variation to keep the music interesting, and not a single note more!
• Remember that most performers will add in some "give and take" to their own rhythmic interpretation. Don't feel like you have to change up your rhythm because the computer playback's interpretation seems "stiff."
Of course, you may also choose to ignore all of the above and simply go for it! After all, the expansion of today's rhythmic language is part of what makes contemporary music exciting! Just...not necessarily easy.
Sunday, September 6, 2009
(note: This is not a researched article, but simply the observations of one composer trying to find his place in the world of new music.)
"We are all Robert Schumann." That is the thought that continues to bounce around my head as I type this blog entry. It was Schumann who first (or perhaps most famously) took on the double-life of composer and music journalist. In his journal, Die neue Zeitschrift für Musik, Schumann acclaimed those fellow composers who he thought were worthy of his praise, and slaughtered those who he believed were compositional hacks. While his actual tenure with the journal lasted for just a little over ten years, his reputation as a music promoter and critic would endure well after his death, and in some regards would even exceed his reputation as a composer.
Re-reading this short description of Schumann's journalism career, I can't help but wonder how his career might have fared differently had he lived not in his own time of musical virtuosos and private salons, but instead in our time of Twitter, YouTube, and Facebook. Would we have had an emerging-composer series featuring the unknown-at-the-time, but soon-to-be superstar Johannes Brahms? Would we have had tweets and status updates from "Shoe_Man" along the lines of "Disgusted by musical hacks - why aren't we hearing more Beethoven?" Would we have seen "fail" videos featuring Liszt and Wagner, edited in the most unflattering of ways?
Of course, Schumann doesn't live in our age. He doesn't need to.
Schumann's legacy is today more present than ever, upheld in the form of contemporary music blogs created and contributed to by today's generation of composers. These blogs cover all the bases when it comes to the world of new music. New music reviews, interviews of up-and-coming young composers, articles outlining the "state-of-our-art" (this one included), podcasts featuring the opinions of new music performers, and even parodies of other new music blogs are all readily available on the internet today.
The fact that so many of my generation of composers are turning to the internet as a vehicle for critiquing, discussing, and exposing the world of contemporary music makes me wonder: at what point did becoming a composer mean also becoming a promoter of new music? For what reason do composers choose to engage in this seemingly selfless act of "community promotion?" The answers to these questions are not at all clear - in fact, to try to answer either question without first conducting interviews and researching the motives of my fellow composers would be both inauthentic and negligent on my part! However, I can answer these questions from my OWN perspective - why I choose to assist other composers through my blog (and through the Fresno New Music Festival), as well as why I believe that more and more composers will continue to join this online community of artistic promotion and self-reflection.
A cynical reply would say that I pursue all of these activities because, in fact, they are indeed self-serving. One could say that I direct the Fresno New Music Festival simply to broaden my network of connections and grow my career as a composer. Likewise, one could say that I use my blog as a "mouthpiece" to the new music world, inevitably linking my name with the pedagogical approaches that I write about. Both of these replies are not without merit - I would be lying if I said I wasn't aware of the benefits that my composition career gains as a result of pursuing both of these activities. However, that would be only a small fraction of the story. The fact is, the amount of time and work that is required to direct a festival, contribute to a blog, and maintain a full-time professorial gig (not to mention parent a two-and-a-half year old and compose!) is astounding, to say the least. If my goals were only self-serving, there are many other avenues available to me that offer both greater benefits and take less effort on my part. As a composer, I know all too well that the creative endeavors I pursue, I do not pursue for money or fame.
A better reply would require one to closely examine the culture of the new music community. The contemporary music world is incredibly small, and in the past has been more-or-less isolated from the "mainstream" of classical music (assuming that classical music HAS a mainstream). Every now and then, a single composer or new music performer breaks through the "parchment ceiling" and manages to become relatively well known in the classical community. This is unfortunately an all-too-rare accomplishment, and is usually associated with a large award or fellowship, as well as an orchestral premiere with an A-list orchestra. These conditions do not occur often, and for those of us who work in relative isolation (for example, Fresno) it is an almost impossible scenario.
I do not mean to sound overly pessimistic, but it is important to have a realistic outlook as to how our community has existed before we can start to examine why these recent online trends have exploded in the way that they have. If "necessity breeds innovation," then it is easy to see why the contemporary music community has embraced the blogosphere. This group of music pioneers is creating both awareness and opportunity, not just for the individuals who participate in it, but for all composers and contemporary music performers. They are using this great tool as a way to shine a bright beacon on all of our artistic endeavors and accomplishments - both large and small. They are bringing greater awareness of our community to the rest of the mainstream public. More importantly, they remind us that we are all a part of this larger community, even when we feel isolated and removed from it because of location or circumstance.
Two blogs in particular - The New Music Box and Sequenza 21 - are doing an exceptional job at "shining this beacon." Both of these sites endeavor to highlight the accomplishments of all composers, as well as provide a proverbial seminar for composers to communicate with each other in a way most of us have not been able to do since our graduate student days. In addition, new websites are popping up all over the internet that are likewise devoted to organizing the "online new music community," such as the United Kingdom based site Dilettante. I am personally heartened by all of their efforts, and in turn am eager to provide my own contributions as I do so today.
Returning to the example of Robert Schumann, it is interesting to know that part of his own motivation for founding "Die neue Zeitschrift für Musik" (in English "The New Journal of Music") was also to shine a beacon on his "contemporary music community." Granted, he also wanted to use his journal as a way to lambast the compositions of those whom he deemed as substandard composers. Still, in the end his journal did champion the accomplishments of many of his contemporaries - Chopin, Berlioz, and Brahms to name a few. Schumann strongly believed in using the journal as a way to celebrate the works of his fellow composers, very much in the same way that we in the online new music community do so today.
I am constantly reminded of lessons in the past, where I was told that "no composer other than yourself would help elevate your career." I am happy that this particular lesson has turned out not to be so. Many composers do in fact want to help each other out, to see our community thrive, and together become recognized for the great art that we contribute to our society. Working together, we may be able to avoid seeing more articles on the "Invisible State of New Music," or on the failing classical world as a whole 50 years from today.
The days of composing in isolation are over. We can no longer afford to ignore each other as we toil away in our studios, disconnected from the rest of the world. We can no longer treat each other as "competition." We must engage with each other in the form of support AND critique, and in turn engage with the rest of the world. The internet community provides for us a unique and awesome vehicle for doing this. With continued effort, we all might one day be able to transcend the "parchment ceiling" and bring our entire community to the forefront of classical music.
"We are all Robert Schumann" now.
The following video is an interesting exercise as to how and when inspiration might strike. Credit must be given to my student Patrick for bringing this to my attention.
Truth be told, I can't tell whether I'm reminded more of Messiaen for his many bird-inspired compositions, or Debussy for his use of minor-minor 7th arpeggios. Maybe both?
Friday, September 4, 2009
Many of my previous posts have been operating on the assumption that when one composes directly into Sibelius or Finale, the composer is likely using standard notation most of the time. After all, these programs are designed to facilitate standard musical practices FIRST; anything which might be considered non-standard in practice is logically less of a priority for these programs, simply because it is, by definition, *non-standard*.
For example, all music notation software - including "lite" versions of software such as Finale Allegro and Sibelius First - easily allows for the placement of notes, standard simple and compound rhythms, expression markings, articulations, common time-signatures, and tempos with few problems whatsoever. However, if the composer wishes to create a musical gesture that is out-of-the-ordinary (for example, an aleatoric "box" where musical fragments are repeated out-of-time over a set number of seconds) the composer may need to wrestle with the program for quite a while before successfully creating the idea. Don't misunderstand me - these programs CAN handle all sorts of alternative notations with a surprising amount of flexibility. However, creating these often involves "breaking" the program on one level or another, and is almost always a time-intense exercise in patience, diligence, and comprehension of the program's reference manual. (Oh - and don't get me started on graphic scores - that is a whole other issue on its own!)
The problem that needs to be addressed is what to do when the composer chooses - either out of frustration or possibly even laziness - not to go through with their original non-standard idea simply because it is too hard to do. Composers that I have worked with in the past often refer to this as allowing the program to dictate your composition to you. This cannot be allowed. Regardless of how the composer chooses to notate an idea, the composer should be free to implement it in any way they see fit - whether it is standard or not (of course, whether it is WISE to notate it one way or the other is a different issue altogether. Wow - that's my second aside in two paragraphs...).
A common solution is to write out your idea by hand first, so that when you are notating it into the computer you force yourself to recreate your hand-notated idea. This method usually ends with the composer succeeding in recreating their hand-written idea after many hours of consulting forums, technical support, and the help menu. However, since this blog is about composing directly into notation software, I would like to propose a couple of other ways to deal with this particular issue should a composer choose to create their idea that way:
• First rule: DO NOT use playback here. Most alternative notations are used to create sounds and rhythms that can't be handled through standard notation. Unfortunately, neither Finale nor Sibelius is built to handle the playback of most alternative notations (with a few exceptions here and there), since that would require the program to be taught how to interpret them. Using playback here can actually be harmful to the creative process, as repeated listening to an "incorrect playback" may color the composer's perception of what they've written, eventually leading the composer to change or even scrap what they have created.
• If you must insist on having an audio playback as a way to hear your progress, make an audio mock-up of your idea in the sequencing software of your choice.
• Get accustomed to "breaking" the program so that you are aware of all the different ways that Finale and Sibelius can handle alternative notation. In Finale, get to know the Special Tool box backwards and forwards. In Sibelius, get to know the Properties box, as it will be your best friend.
• Try creating a Schenkerian diagram in either program. This is a great way to teach yourself how to remove barlines and stems, change the size of noteheads, extend beams, and place "invisible" notes to enforce non-standard spacing.
• Experiment with percussion staves. They are programmed to handle multiple sounds and notations on a single instrument, and are great tools to use for graphic representations of sounds.
• For power users - make your own fonts! This way, you can simply bring in your newly created font and place the notes as easily as you would standard notes.
• When all else fails, get to know a graphics program. Both Finale and Sibelius allow the importation of graphics into your scores, allowing for all sorts of alternative notations free from the constraints of the program (Yes - this isn't actually composing into our notation software, but it IS still at the computer!).
There does seem to be a trend that, with each successive version of Sibelius, more and more alternative notations are becoming integrated into the software. Flutter-tonguing, for instance, is a technique that is now completely integrated, even switching patches automatically when called for. Quarter tones, jazz "scoops and falls," and other non-standard articulations are becoming quite common. This is a good trend, and I hope that over time more and more non-standard practices continue to be integrated into the software.
Monday, August 31, 2009
Thursday, August 27, 2009
One of the big issues that I tend to notice when listening to MIDI playback is that sampled notes sound "dead" to my ears. For example, while a single stroke on an acoustic piano is capable of generating a "glorious montage of harmonics echoing through space," a single stroke on a MIDI piano is, unfortunately, not nearly as satisfactory. True, new sound generators have gotten awfully close to mimicking the sound of a real, honest piano (and convolution reverb has done wonders in making that same piano sound like it is in the largest of concert halls), but to my ears even these new virtual pianos still lack a certain resonant quality that a real piano has.
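For those curious how convolution reverb earns its name: it literally convolves the dry signal with a recorded impulse response of a real hall, so every sample of the input triggers a scaled copy of the room's decay. Here is a toy sketch (in Python, purely as an illustration, with a made-up three-sample "room" - real impulse responses run to hundreds of thousands of samples):

```python
def convolve(dry, impulse_response):
    """Naive convolution: each input sample triggers a scaled copy
    of the room's impulse response; all the copies sum into the output."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h
    return out

# A single "note" (one click) through a toy three-sample room response
dry = [1.0, 0.0, 0.0]
ir = [1.0, 0.5, 0.25]   # direct sound, then two decaying reflections
wet = convolve(dry, ir)
print(wet)  # [1.0, 0.5, 0.25, 0.0, 0.0]
```

Notice that the note now "rings" past its attack: the output is longer than the input, because the room keeps sounding after the source stops. That tail is precisely the resonance a dry MIDI note lacks.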
I'll be the first to admit that our new virtual orchestras do a much better job of creating acoustic resonance than sound samplers from even as recently as five years ago. Back in the day, when all of my sounds were generated on an Alesis QS-6, I had to introduce a substantial amount of reverb simply so that I could remind myself that all of these generated sounds did occur in a space, and that my compositions needed to reflect that (no pun intended!). It was a crude but effective trick that ensured that as I composed I was conscious of the "space between the notes."
The problem that the computer composer needs to deal with, thus, isn't that the notes necessarily sound bad (because they don't), nor that they lack resonance (of a type), but that even with all of these advances in technology the sounds still don't adequately represent the "space between the notes." Our setting - the resonant space where the ensemble is supposed to be represented - simply doesn't sound "right." I touched briefly upon this issue back in my post "Baby Got Playback" (June 2009), but I feel like this is a point worth revisiting, if for no other reason than to press upon what problems occur as a result of this issue, and how we as composers can be conscious of it.
Without adequate representation of this resonance, we end up with digital silence in between our notes. Digital silence is an absolute absence of sound, something which is impossible in an acoustic setting, but occurs quite often in a computerized one. Normally, even in the quietest of rooms, faint hums, whirrs of fans, whispers, and other ambient sounds are present. Not in a computer.
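The difference is easy to demonstrate. In digital terms, silence is a buffer of samples that are all exactly zero, while even the quietest real room has a measurable noise floor. A toy sketch (in Python; the noise level here is an arbitrary illustrative value, not a measurement of any real room):

```python
import random

def rms(samples):
    """Root-mean-square level of a buffer of audio samples."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

# "Digital silence": every sample is exactly zero
digital_silence = [0.0] * 1000

# A quiet room: no signal, but a faint noise floor is always present
random.seed(0)
room_tone = [random.gauss(0.0, 0.0005) for _ in range(1000)]

print(rms(digital_silence))   # 0.0 - a physical impossibility
print(rms(room_tone) > 0.0)   # True - even "silence" carries some energy
```

That perfectly zero floor is what makes gaps in MIDI playback feel so much emptier than rests in a concert hall.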
The main problem with this digital silence is that the composer thus feels the need to "fill the void." Much like how most of us try to fill awkward silences with nonsensical conversation, the composer is compelled to try to fill up empty regions in the music with more and more "attack" points. That's right - not notes, but attacks. The difference is crucial. A note can be both long and short; however, the sustained long note often falls victim to the same problem that silences do: not enough resonance to fill the space. An attack, on the other hand (as taken from MIDI terminology for ADSR, or attack, decay, sustain, release), is the point at which a NEW note is introduced. These attacks, often in the form of 8th or 16th notes, are created to fill both types of resonant gaps - the ones created by silence, and the ones created by sustained pitches. The end result is that we have new notes introduced consistently without pause, often overwhelming the texture. Sometimes, this is a good thing. After all, I enjoy composing intense, pulsing 8th note rhythms that permeate the entire texture as much as anyone. However, we must be aware that we are doing this as an aesthetic choice, rather than simply trying to "fill the gaps."
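For readers unfamiliar with the ADSR model mentioned above, here is a minimal sketch (in Python, with arbitrary illustrative timings) of how a synthesizer shapes each note: the "attack" is only the opening ramp of the envelope, while the rest of the note's life is decay, sustain, and release:

```python
def adsr(t, attack=0.05, decay=0.1, sustain=0.7, release=0.2, note_length=1.0):
    """Envelope level (0.0 to 1.0) at time t for a note held note_length seconds."""
    if t < 0:
        return 0.0
    if t < attack:                   # ATTACK: ramp from silence to full level
        return t / attack
    if t < attack + decay:           # DECAY: fall from full level to sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_length:              # SUSTAIN: hold while the note is held
        return sustain
    if t < note_length + release:    # RELEASE: fade to silence after note-off
        return sustain * (1.0 - (t - note_length) / release)
    return 0.0

print(adsr(0.025))  # 0.5 - halfway up the attack ramp
print(adsr(0.5))    # 0.7 - sitting on the sustain level
print(adsr(2.0))    # 0.0 - long after the release: digital silence again
```

The point of the distinction in the paragraph above: adding more attacks means restarting this envelope over and over, which is a very different musical decision from letting a single sustain (and a real room's resonance) carry the space.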
The other common problem that occurs, as I mentioned in my previous entry, is that the composer will often increase the tempo of the composition to help fill these gaps. While this may increase the excitement potential of the piece (and sound great in the computer), it often leads to very muddy and jumbled live performance, particularly when combined with a large concert hall. I personally stumble over this issue myself, and I have to constantly knock my tempos down to remind myself that they will sound fast enough on the concert stage.
Simply being aware that the space between the notes isn't accurate is the first step in learning how to deal with this issue. Experience with live performance helps too. Short of both of these, though, here are some other steps that a student composer can implement to help train the ear:
• Consciously add SPACE to your compositions. It is always better to err on the side of having a rest too long, than not long enough.
• Scan your pages for "white" notes. Even in sections of music that intentionally feature driving 8th and 16th note patterns, don't neglect sustained pitches and pedal tones. They will go far in holding your composition together, like glue.
• Work on composing slow music. Composing in MIDI is an ideal medium for fast compositions; slow compositions on the other hand often sound stilted and unsatisfactory in MIDI playback. Trust your own instincts.
• Remember that one note is often enough.
• As mentioned in the past, keep your tempos a notch below what you think sounds "fast enough." You'll find that your actual performance sounds more than fast enough.
As always, I'm eager to hear how those of you reading approach this issue, or if you really think it's as much of a problem as I do!
On a different note: instead of using this blog as a forum for my new residency with the Heretic Opera, I will instead be contributing to the Heretic Opera's blog. I will post here one more time when that officially begins.
Have a great week!
Monday, August 24, 2009
You might be wondering why it is called a "remote" residency. Since I won't actually be residing with the opera company (located in Portland, OR - a shame that I can't actually live there for the year!), I will instead be writing about my experience writing an opera here on this blog, as well as posting video logs and interviews over the course of the creative experience. Think of it as a "behind-the-scenes" look into writing a contemporary opera.
Anyways, I wanted to share the news here first! You can find out more about the Heretic Opera on their website. I'm personally very excited to be participating in this project.
Since I will now be posting multiple topics here, I will also begin labeling my posts for your reading pleasure. My weekly composition tips log will be up as advertised on Thursday, so check back then. Until next time!
Thursday, August 20, 2009
Which is why this post is late. In fact, I'm afraid that my normal weekly posts may be sidelined slightly while I get re-situated into a normal schedule. A couple of changes:
1. I'll be posting on Thursdays now instead of Wednesdays, to better fit my teaching schedule.
2. I'll be branching out a bit, discussing not just composition tips in Finale and Sibelius, but other topics as they relate to the music industry and composing.
3. Now with 25% more irreverent non-sequiturs!
These changes will be taking place next week. Today - I have classes to prepare, meetings to go to, and documents to burn. Fun!
In the meantime, you may want to check out Sequenza 21 (if you haven't done so already), a great blog specifically dedicated to contemporary music and the composition community. Enjoy!
See you all next week!
Wednesday, August 12, 2009
Rather than return to my normal, tried-and-true method of part extraction (the same method that I've used since Finale 95), I chose to shake things up a bit with this new orchestra piece that I am wrapping up. I dove into the brave new world of linked parts. And you know what - they worked FAR better than I could have imagined. Granted, I had to relearn quite a few things, including not only how to set up preferences for the linked parts, but also what adjustments I had to make to my actual score so that the two "beasts" played nice with one another (Sibelius' ability to create "blank pages," as opposed to "music pages," suddenly became a huge factor in how this all played out).
The end result is that all of my parts are, well, perfect. Very little adjusting needed. Cues are all nice and embedded into my score (thanks to the "Paste as Cue" command - another great feature!), and show up perfectly in my parts. All in all, a great first experience.
I just thought I would share this with anyone who has yet to try linked parts in Sibelius. Fear not this strange new world - go for it!
Tuesday, August 11, 2009
Take enharmonic spellings for instance. It feels like I have gone over my work at least six or seven times up to this point (although in reality it is more like one or two times...it just feels like more...). Yet, as I continue to scan the score, I continue to catch incorrect spellings of notes. These include: flats where they are supposed to be sharps, augmented seconds and diminished thirds, multiple instances of missing accidentals or accidentals where there should be none, etc., etc. It makes me wonder why these enharmonic spellings, a relatively benign problem when writing by hand, have become such an issue when using music notation software.
Of course, I do have a hypothesis as to why this is. In my opinion (and I stress - this is my opinion, not fact!), the relatively quick process of entering notes into the computer allows for the user to bypass minute parts of his or her own creative process. This includes the small, almost inconsequential decision of choosing one enharmonic spelling over another. When writing by hand, one has to methodically choose whether the note will be one spelling or another - after all, the note can't be placed in two places at once on the staff! This doesn't necessarily mean that the right enharmonic spelling will be chosen by the composer, but simply that the composer IS still in control over that choice.
I'll be honest: many times, in my own compositional process, I too fall into this pitfall. I input notes into Sibelius using a keyboard, which as a result completely bypasses this step of choosing an enharmonic spelling. The keyboard doesn't know which enharmonic spelling to use; in fact, all that the keyboard is doing is sending a MIDI note command to the software, which then does its best to interpret whether or not the note is a flat or a sharp. This decision is often completely arbitrary (although depending on the situation, one will be favored over the other), quite frequently leading to incorrect spellings. After the note is input, I am often on to the next note - choosing to leave the spelling aside as an "unanswered question," one that will inevitably have to be answered through the editing process. The problem in this case is that I never made a choice as to which enharmonic spelling I would prefer, and instead let the computer choose for me.
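To see why the software is reduced to guessing, consider what a MIDI keyboard actually transmits: a bare note number, with no spelling information at all. A small sketch (in Python, purely as an illustration; the spelling table is simplified, omitting rarer spellings like B#, Cb, E#, and Fb):

```python
# Common spellings for each MIDI pitch class. Every black key maps to
# (at least) two spellings; the note number alone cannot distinguish them.
SPELLINGS = {
    0: ["C"], 1: ["C#", "Db"], 2: ["D"], 3: ["D#", "Eb"], 4: ["E"],
    5: ["F"], 6: ["F#", "Gb"], 7: ["G"], 8: ["G#", "Ab"], 9: ["A"],
    10: ["A#", "Bb"], 11: ["B"],
}

def possible_spellings(midi_note):
    """All common spellings for a given MIDI note number (0-127)."""
    octave = midi_note // 12 - 1     # convention: MIDI 60 -> C4, "middle C"
    return [name + str(octave) for name in SPELLINGS[midi_note % 12]]

print(possible_spellings(61))  # ['C#4', 'Db4'] - the software must pick one
print(possible_spellings(60))  # ['C4'] - naturals are (usually) unambiguous
```

Every black-key note arrives at the program as exactly this kind of ambiguity, which is why the choice falls to an algorithm (or an arbitrary default) unless the composer steps in and makes it herself.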
Both Finale and Sibelius have settings to assist with this process, but they are not 100% foolproof. In particular, I find that these programs are most susceptible to incorrect spellings when the composer chooses to do one of the following:
a. when using a key signature, modulating to an unrelated key
b. working in a remote key (i.e. six flats/sharps or more), particularly with a transposing instrument involved
c. working with a symmetrical scale (whole tone, octatonic, etc.)
d. working in an open key (no key signature)
The last one is where the majority of problems seem to arise for both myself and my students. Without a key signature to check against, the program is literally guessing as to which enharmonic spelling is best in any given situation. The truth is, even when writing by hand, figuring out the best enharmonic spelling is often not an easy task. There are many instances - particularly when composing using a non-tonal language - where the best choice is far from apparent. The example of the whole-tone scale is a classic case of this, where it is impossible to spell the scale without either using a diminished third between adjacent pitches, or a diminished octave across the entire scale.
Nonetheless, if nothing else it is the composer's duty to ensure that all enharmonic spellings in the work are chosen by the COMPOSER and not by the computer. This process can be saved for editing, as I often choose to do. However, if you want to find the best spelling the first time, here are a few tips to assist in this process:
1. Always be aware of the scale that you are using (if you are using one), and make sure that the notes you are choosing fall into that scale. If the note is a non-chord tone, then be able to rationalize its spelling as such.
2. When using a symmetrical scale, be consistent with your spelling. In the case of a whole-tone scale, choose the same place within the scale for your diminished third to appear, and make it a location that occurs infrequently (for example, if you are alternating between two steps in the scale, that probably isn't your best choice of a location!).
3. When composing atonally, choose intervals that are the easiest for the performer to read. Half-steps, whole-steps, and common intervals are always easier than augmented and diminished intervals other than the tritone.
4. Prioritize intervallic spelling within one instrument over vertical spellings within the ensemble - after all, an individual performer is not particularly interested in the spellings of his or her fellow musicians.
5. Play or sing the lines as you compose them, considering whether or not the spellings feel right to you.
I would love to read up on how the rest of you approach enharmonic spellings, particularly when working in an open key or when using an asymmetrical scale. Until then, though, I really should return to my editing. After all, I have a bunch of A-sharps in my piece now that really should be B-flats.
Monday, August 10, 2009
Ok, so this has absolutely NOTHING to do with this blog, but I thought I would share it anyways. I created this earlier today as a way to constructively release some pent-up frustration, mostly related to parenthood.
(Although it DOES refer to midi output commands, so I suppose that is the connection.) Enjoy!
Wednesday, August 5, 2009
I was surprised NOT by the fact that he brought me this sketch, but that this particular student - a very tech savvy, computer geek in his own right - chose to sketch this work by hand.
Much of what this blog is about is on how to compose within the parameters of notation software. However, that doesn't mean that every task is best suited in the computer, and that pen and paper should be completely abandoned. Pre-composition - the creative process that each composer engages to help discover and organize their musical ideas PRIOR to the actual writing of notes - I believe is one of those tasks.
Now, before I receive 20 comments on how it is completely practical and legitimate to "pre-compose" on the computer, I want to stress that while I believe that pre-composition is best suited away from the computer, I also recognize that this is a very personal process. Each and every composer will approach this from a different perspective. I should also stress that I have tried to sketch my pieces in Finale and Sibelius in the past, but in the end I always seem to eventually need pencil and paper to get my basic ideas down in a satisfactory manner.
So, why do I believe that pre-composition should be done on paper? For me, it comes down to immediacy and convenience. There may be hundreds of ways that I might choose to jot my ideas down - from simple words, to graphic imagery, use of a timeline, notated ideas, literary reference - the list goes on. While there are likely ways to incorporate all of these approaches into notation software, these programs really aren't meant to handle tasks like these efficiently. Similarly, I might be able to use other programs to assist in this process (such as typing ideas into a word processing program, or creating graphics in Photoshop or Freehand) but in the end this is a cumbersome and limiting approach for me, not to mention considerably slower than simply writing words and images on paper.
Still, despite this, there is part of me that WANTS to use my computer for pre-composition. Despite many failed attempts, I often will still turn to the computer at the beginning of my creative process. I understand that, for me, writing down my initial ideas by hand is my preferred method TODAY, but I would love to be able to discover a process that is just as immediate and convenient on my computer. I want to be able to have the same liberating feeling that my student had just this past week, only when sketching with a mouse and keyboard. I simply haven't discovered what this is - yet.
So, I am opening this discussion up to all of you who are reading this. How do you approach pre-composition? Do you sketch by hand, or have you found a method that works for you on the computer? Let me know, and I will likely try it out myself when I start my next piece.
Friday, July 31, 2009
Anyways, let me know if you like or dislike the new look. Thank you!
Wednesday, July 29, 2009
I imagine that when many of us compose, we occasionally find ourselves "in the zone," so to speak. In this state, the notes seem to effortlessly spill out of our minds, onto the computer screen, at a pace far faster than normal (whatever that pace may be...see last week's blog for more on this). I know that when this occurs to me, I end up feeling compelled to do whatever it takes to get the notes on the page - such as skipping past all of the "little things" to ensure that the notes are all there.
The little things in question, though, are often not so little.
These little things that I am referring to are details such as dynamics, articulations, phrase markings, expression markings, and tempos. In the heat of the moment, it is often easy to overlook these - after all, they simply aren't as sexy as the notes themselves. I know that in my own experience composing, I will occasionally discover that I've composed an entire phrase or two without placing these details, simply because I was overeager to get the notes themselves on the page. Unfortunately, I have found that this often turns out to be a compositionally fatal practice, leading to needless extra editing - or worse - complete rewrites. The notes that seemed to come out so effortlessly ended up lacking more than just the details - they lacked musicality as well.
Music is, to put it bluntly, far more than notes. Music has expression. Music has character. The same string of notes and rhythms can be performed countless ways, in a variety of styles, moods, and interpretations. These qualities all depend on two things: one, the markings the composer chooses to provide, and two, how the performer chooses to interpret those markings.
When composing into Finale or Sibelius, the expediency and immediacy of note entry sometimes leads the composer to prioritize notes over the musical phrase. This shouldn't come as a surprise, as note entry is the first priority of the programs as well. Both Finale and Sibelius have a large variety of ways to input notes, ranging from point-and-click entry, to keyboard entry in real time, as well as several methods that fall somewhere in-between. The programs are designed to make note entry as easy as possible. This isn't a complaint - far from it, in fact - but rather simply a fact of how these programs operate.
An unintended side effect of this is that the remaining details become "second-class citizens" within the program. In Finale, all of these details require the composer to access a separate tool (smart shapes, measure expressions, articulations, etc.) in order to point-and-click the detail into the score. In Sibelius, many of these details are handled by secondary numpad panels, or by typing them in using Command-E (for expressions) or Command-T (for techniques). Slurs and hairpins are likewise handled using their own hot keys. These are not difficult tasks, nor are they time consuming. However, they are all SECONDARY functions in relation to the prime function of note entry. This inevitably leads to the creation of compositions that can best be described as "notes, with a sprinkling of music here and there." This is especially evident in early student compositions, often when the student is already consumed with the creation and development of notes in the first place (with or without notation software!).
I find that in those moments when I unintentionally focus entirely on the notes, I also unintentionally lose sight of the larger emotional quality of my music. Sure - I might have a "general concept" of whether the notes are fast, slow, loud, or soft - but I am often not considering whether those same notes are light, dark, violent, timid, lovely, or sorrowful. Notes alone cannot convey this, and without expressions, articulations, or dynamics to guide me, I often have to formulate what those notes are supposed to be well after the creative moment has passed. This, as you might guess, can often lead to less-than-satisfactory results.
So, while it may not be the most expedient method of composing, I do recommend to my students (and anyone else who might ask!) that they input as many of the details as the music demands at the same time as note entry, rather than after the fact. Doing so helps the young composer learn that there is in fact a HUGE difference between a single whole note - naked, and a whole note that "crescendos over the entire length of the bar, from pianissimo to fortissimo, underneath the descriptive marking grotesque." One is a note - the other, for better or worse, is music.
(A brief aside)
To further press my point, allow me to present a portion of my blog entry now "without the details":
when composing into finale or sibelius the expediency and immediacy of note entry sometimes leads the composer to prioritize notes over the musical phrase this shouldnt come as a surprise as note entry is the first priority of the programs as well both finale and sibelius have a large variety of ways to input notes ranging from point and click entry to keyboard entry in real time as well as several methods that fall somewhere in between the programs are designed to make note entry as easy as possible this isnt a complaint far from it in fact but rather simply a fact of how these programs operate
Not a pretty sight, eh?
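(Incidentally, that mangled paragraph is easy enough to produce mechanically. Here is a throwaway Python sketch - purely for illustration, and obviously not how anyone composes - that strips a passage of its capitalization and punctuation, the prose equivalent of entering notes without their details:)

```python
import string

def strip_details(text):
    """Lowercase the text, remove all punctuation, and collapse the
    leftover whitespace - prose with its "details" taken away."""
    bare = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(bare.lower().split())

print(strip_details("This isn't a complaint - far from it, in fact!"))
# prints: this isnt a complaint far from it in fact
```

Removing the details takes one short function; putting them back takes an editor.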
As a final comment, I must give Sibelius a tremendous "high five" for introducing Magnetic Layout in Sibelius 6, a feature which I already drooled over last week, and one which I must once again drool over this week. This simple-yet-brilliant feature finally allows the composer to focus completely on detail entry WITHOUT having to worry about collisions and score layout. Now, there are no more excuses not to enter the details with the notes, as doing so is just about as immediate and expedient as entering the notes themselves.
So, now I turn this over to all of you reading. When do you find that you enter your details and markings? At the same time as the notes? After the fact?
Until next week then - that is, as long as I don't get called back to jury duty. After all, Fresno also has a federal court.
Wednesday, July 22, 2009
Since last week's post, "The Big Picture," I have found myself preoccupied. You see, that post stirred up a great amount of discussion on several topics that I didn't anticipate. These included discussions of view mode preference (where, based upon my poll, I see I am in the minority), the use of "Staff Sets," and, perhaps most surprisingly, a request from a Sibelius engineer as to what additional features could be implemented to help manage "complexity" when composing and arranging. Upon reflection, though, it seemed to me that these all related to a single issue - how we manage our own personal work flow.
A large reason I became preoccupied with this topic was that it became very clear through the discussion that there are many features in Sibelius that I simply am not taking advantage of - features that, after experimenting with them for a bit, clearly do assist the composer in one form or another. Many of these features seem to fall into two categories: those that provide the user with an increase in speed, and those that help with the "management of complexity." Several, in fact, do both (Sibelius 6's new Magnetic Layout comes to mind - a godsend of a feature if I do say so myself).
It made me wonder whether I was consciously choosing not to use newer features - whether there was a reason I chose to stick to older, possibly less efficient approaches. For example, I look at features like Sibelius' "Ideas" and think to myself, "Wow! That is SO cool! I'll definitely use that in my next piece!" Yet, to date I have not taken advantage of it. I can surely see how the feature would assist my creative process, but something prevents me from using it. Is it simply old habits? Or is it something deeper?
After much thought, I've come to the conclusion that I stick to my more old-fashioned approaches not out of laziness, ignorance, or some sense of loyalty to old practices, but rather because they allow my work flow to match my own creative goals. My work flow has always been about trying to emulate the "pencil and paper" experience on the computer. My goals have never been about working faster or having the computer manage complex ideas for me. It has always been far more important for me to find a work flow that lets me remain conscious of every task I do, so that, in the end, writing on the computer is as personal an experience as writing by hand.
In some odd way I actually prefer a slower approach when working in Sibelius. Using the program in such a way forces me to focus on the individual pitches that I compose, rather than inadvertently allowing me to throw too many notes on the page at once (a technique that I affectionately refer to as "vomiting notes"). Additionally, viewing the score in "page view" rather than scroll view or Panorama seems essential to my own creative process, since it allows me to see the WHOLE piece as it will look in the end - including empty measures, abbreviated instrument names, page numbers, margins - everything. This is, admittedly, a much slower process than using Panorama/scroll view, or using Staff Sets to see only the staves you need. However, for me, it fits my own compulsive need to see everything at all times.
Of course, there are several newer features that I do enjoy using with great regularity. As mentioned above, Sibelius' Magnetic Layout is, in my opinion, the greatest thing since sliced bread - especially for a composer like me, who places dynamics and other markings as I write the notes! I also DO believe that the Ideas panel would fit me in the right situation - I simply haven't sat down and really tried to use it... yet. So, obviously I'm not against the idea of saving time - particularly when it involves something as tedious as fine-tuning the score layout.
The fundamental question that I would like to ask everyone thus isn't how one approaches his or her own individualized work flow, but WHY. What reasons lie behind the work flow choices you make? Is speed a priority? Efficiency? Organization? Or, like me, do you intentionally try to slow yourself down, so as to focus on individual notes? Please share your thoughts!
Thursday, July 16, 2009
Wednesday, July 15, 2009
In all honesty, though, this isn't a problem that exists solely in notation software. I was first introduced to this compositional issue as a student myself, when one of my own composition professors encouraged me to compose (by hand) on "very, very large paper." I really didn't understand what my teacher was trying to do at the time - after all, how would larger paper improve my writing? However, when I tried it out, it became apparent how much easier it was for me to put my composition into context with itself. I could see exactly what I had written earlier, compared to what I was currently working on. In essence, I was finally seeing both the forest and the trees. (This may all seem fairly obvious, but until you actually try it out, you don't realize what you are missing!)
In music notation software, this is a somewhat harder problem to overcome. Without spending a ridiculous amount of money on a "very, very large monitor," the digital composer is forced to work within the relative confines of the screen that he or she is working on. Tunnel vision is almost unavoidable, since the software makes it impossible to see anything other than what the composer is currently working on (unless someone knows of a way to do split-screen in Finale or Sibelius??? Or perhaps a Finale or Sibelius developer could add that as a new feature in the next version?). This problem is compounded when working on a score for a large ensemble, since inevitably the composer will only be able to see one-half of the ensemble on the screen at any given time - not an ideal way to compose!
It may seem pretty obvious, but the very first thing one should do in this situation is to PRINT the score out as you work on it. Having a hard copy to refer to is essential, as it allows the composer to keep onscreen just what he or she is currently composing, rather than constantly scrolling back and forth within the digital score. A hard copy can also be marked up, making editing a much easier process later on (after all, as I often threaten my students who don't bring in hard copies of their scores: permanent marker + monitor = new monitor).
If you are interested in seeing beyond the individual pages of the score, here are a couple of other tricks to try:
• Post your print-out onto large sheets of cardboard (34"x22") so you can see up to 8 pages at once.
• Vary your screen view as you compose - work in different magnifications, and in different view modes.
• Make a PDF version of your working score so that you can refer to it in a separate screen window.
And, if you have some money to burn - buy a second monitor to display your PDF file side by side with your music notation software.
It should be mentioned that I am a BIG fan of working off of PDFs. They are much more eco-friendly than hard-copy printouts, and are surprisingly easy to read from. Of course, you can't physically write on a PDF, so a hard copy will still be needed at some point in the creative process (especially if you have a sarcastic composition teacher who likes to threaten his students' laptop monitors with a red Sharpie). If you don't have a way to print PDFs, I highly suggest acquiring one.
How about you? What ways have you come up with to overcome "Notation Software Tunnel Vision?" Please share your thoughts!
Wednesday, July 8, 2009
In the meantime, I would be more than happy to take suggestions from the seven of you who read this blog as to what signature I should add to my weekly posts ("Happy composing!", which I was just about to write here, just seemed very cheesy to me all of a sudden...). The winner will be given virtual cookies and milk. :)
Wednesday, July 1, 2009
Tuesday, June 23, 2009
Here is a question for all composers who are reading this blog: How many times have you worked out your piece using Finale or Sibelius playback so that it sounds "just right," only to be horribly disappointed when you actually heard the piece performed by live musicians?
Conversely, for all of you performers out there reading: How often have you received a newly composed piece of music that you dubbed - for its lack of musicality, phrasing, or physical practicality - "Finale" music?
While there are many reasons why these two problems exist, I would wager that one possible culprit in both cases is that often misused tool known as MIDI playback. As many of us probably already know, MIDI playback is, at its worst, a crutch for the inexperienced composer. It gives the composer a false sense of what their piece sounds like, and often glosses over the little details, such as musicality and human interpretation. In inexperienced hands, MIDI playback can create a litany of problems for composers and performers alike. However, MIDI playback can also be used effectively as a tool by the skilled composer.
It cannot be denied that the sound of MIDI playback has improved considerably over the past few years, in particular with software samplers like the Garritan Personal Orchestra and Kontakt. The bottom line, though, is that despite the many advances in sound sampling technology, the virtual instruments available in Sibelius and Finale simply do NOT sound like their live counterparts. Sure - they are likely real recorded samples of real instruments. They may have a feature known as key switching to allow for a variety of articulations. They may even have a "human playback" feature, which allows small amounts of give-and-take to be placed into the playback itself. However, despite all of these advances, virtual instruments are - and will remain - approximations of real instruments.
That isn't to say that these instruments don't sound good. Oddly enough, the problem might be that, today, these virtual instruments sound too good. Not all that long ago, it was common knowledge that MIDI instruments sounded quite awful. No one would ever mistake a wavetable saxophone played over a General MIDI soundcard for a real saxophone. Over the past five years, however, this has changed considerably. Virtual orchestras, bands, and choirs - as well as "hybrid" ensembles (mostly virtual instruments, with a few live musicians for color) - now make up the majority of film and video game scores. This is the case for two reasons: one, because they have a "perfect" sound that live musicians are incapable of creating (nor would we want them to), and two, because these artificial ensembles are considerably cheaper than hiring a live orchestra. Virtual ensembles are capable of playing anything the composer throws at them, regardless of little things like balance, range, or even proper orchestration. On top of this, it has been documented that today's young adults - the coveted market of all movie producers - actually prefer the sound of the virtual orchestra over the live one, most likely because that is the sound they are used to. Imagine the difficulty, then, when those same young adults set out to become composers - composers for live musicians.
Another problem that needs to be addressed is that, by using MIDI playback, young composers are not audiating their own music. They are not hearing the music in their heads as they write it, and instead are letting the computer do the job for them. A composer should be able to look at a score and, in their head, hear the piece - or at least a close approximation of what it should sound like. This is a learned skill - one that MIDI playback unfortunately keeps from developing properly. So, the problem for the young composer boils down to one unavoidable fact: using MIDI playback hinders the development of the composer's ear. The traditional way to overcome this problem is both to train young composers in the ways of proper instrumentation and orchestration, and to encourage them to LISTEN to as much live music as possible, particularly music performed in the concert hall (more so than recordings, which are doctored to achieve a more-or-less perfect sound). Having composers work directly with other student performers while writing their pieces is another time-tested and practical way of training the ear of the student composer.
Beyond this, though, what does one do about playback? After all, it cannot be denied that it is a very convenient tool, despite the many hang-ups that come with it. I myself use playback quite a bit - although I must also mention that my own ear is quite trained and well developed. I find it is a great tool for assessing the pace of my music, as it helps me determine whether I need more music in a section, or perhaps whether a section needs to be cut down a bit. I also find that I unconsciously accept many flaws and inaccuracies in my playback, simply because I have a good working understanding of how the live performance of the piece will sound. My playbacks often sound completely wrong - something which I am not only OK with, but find necessary as a way of keeping my ear grounded, so to speak. While not always the case, a good composition will often have a poor-sounding playback.
So, what to do? My advice would be the following: if you are just starting as a composer, DO NOT USE MIDI PLAYBACK. It's that simple. I'm not saying that it should never be used, but when one is first learning to compose the negatives simply outweigh any positives that might come from the tool. The first priority for the student composer is to develop their ear. However, as the ear develops, playback can - and should - be introduced in small amounts. Once ready, here are a few things that a young composer can try when returning to MIDI playback:
• Try using playback with only one voice, while listening to the remaining voices in your head as you compose.
• Use only piano sounds in your playback, while keeping in your ear the actual sound of the instrument you are writing for.
• Insert extra silence into your playback. Remember that a single whole note, performed by the best musician, will still sound incredible - even if it sounds flat and stale as performed by the computer (an issue of resonance that will be discussed at a later time).
• Take your tempos down a notch or two, until they feel "just a touch too slow." This is the correct tempo (this is also an issue of resonance).
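For readers who like to tinker, the spirit of these exercises can even be sketched outside the notation program. The Python below is purely illustrative - it operates on made-up event dictionaries, not on any real Finale, Sibelius, or MIDI library API - to show what "soloing one voice," "everything on piano," and "tempo down a notch" might look like as operations on a stream of playback events:

```python
PIANO = 0  # General MIDI program number for Acoustic Grand Piano

def solo_voice(events, channel):
    """Keep only the events on one channel, silencing the rest."""
    return [e for e in events if e["channel"] == channel]

def all_piano(events):
    """Replace every instrument assignment with a piano sound."""
    return [{**e, "program": PIANO} if e["type"] == "program_change" else e
            for e in events]

def ease_tempo(events, factor=0.9):
    """Take the tempo 'down a notch' by scaling BPM markings."""
    return [{**e, "bpm": round(e["bpm"] * factor)} if e["type"] == "set_tempo" else e
            for e in events]

# A toy two-voice score with a tempo marking (hypothetical data, not real MIDI)
score = [
    {"type": "set_tempo", "bpm": 120, "channel": 0},
    {"type": "program_change", "program": 40, "channel": 0},  # violin
    {"type": "note_on", "note": 60, "channel": 0},
    {"type": "program_change", "program": 42, "channel": 1},  # cello
    {"type": "note_on", "note": 48, "channel": 1},
]

print(len(solo_voice(score, 0)))       # 3 - only channel-0 events survive
print(all_piano(score)[1]["program"])  # 0 - the violin is now a piano
print(ease_tempo(score)[0]["bpm"])     # 108 - 120 bpm taken down a notch
```

The point, of course, isn't the code - it's that each exercise deliberately removes information from the playback, forcing your ear to supply what the computer no longer does.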
Think of playback as an advanced power tool, like a circular saw. In the hands of an accomplished craftsman, it can assist in creating a great work; in the hands of a novice, it can take off a hand. Develop the ear first, and then you will find that playback functions not so much as a crutch, but as a tool.