#rpm12 day 4: Where the line between instrument and composition is blurred

A quandary: What do you do when you’re not sure if you’re really creating music yourself, or just manipulating an interactive composition by someone else?

In the past, the only place this situation would likely have occurred was at an art installation in a museum, but what happens when that art installation is in your pocket?

I suppose in some ways the situation is akin to sampling. Or is it?

The key question all of the above leads to is this: What exactly is Bloom, the iPhone app by Brian Eno and Peter Chilvers? Besides an app, I mean.

Is it an instrument? Is it an interactive composition? Is it a piece of art?

And, after you’ve found a passable answer to each of these questions, another: If you record the output of an app like Bloom, who is the composer? Can you release it on an album? Do Brian Eno and Peter Chilvers receive composer credits?

The questions have both philosophical and practical implications. On one hand, you fall down the rabbit hole of the eternal, unanswerable question: What is art? On the other, you face more mundane, tangible concerns: If I put this on my album, is it copyright infringement? Or, at least, is it dishonest to take credit for the composition?

I don’t have answers to any of these questions, but I’m pondering them a lot this morning, because a portion of my music-making activities last night was devoted to the creation of a piece of music using Bloom.

To further complicate matters, consider this: Bloom consists of a simple touch interface that controls a set of predefined algorithms within the app: tones, scales, repetition and delay. Those four parameters (and probably more; I’m just going by what I perceive happening within the app as I use it) were defined by Eno and Chilvers, with some options configurable by the user.

The user “plays” this “instrument” by tapping in different places on the screen at different times. But the app also generates tones on its own. There’s a background wash of sound that the user does not directly control, and if the app is left alone long enough, it will randomly begin “playing” notes itself.
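For the technically inclined: if I had to sketch my mental model of what the app is doing, it would look something like the Python below. To be clear, this is purely a guess at the behavior I perceive, not Eno and Chilvers’ actual code, and every name and number in it is invented.

```python
import random

# A guess at Bloom-like behavior, NOT Eno and Chilvers' actual code.
# Every name and number here is invented for illustration.

SCALE = [60, 62, 65, 67, 70]  # hypothetical pitch set (MIDI note numbers)
DECAY = 0.9                   # each echo is a little quieter (assumed)
IDLE_TICKS = 4                # loop passes before the app plays itself (assumed)

notes = []       # [pitch, volume] pairs currently cycling in the delay loop
idle_ticks = 0   # loop passes since the last user tap

def play(pitch, volume):
    """Stand-in for the synth voice; a real version would trigger audio."""
    print(f"note {pitch} at volume {volume:.2f}")

def tap(x):
    """A screen tap: horizontal position (0.0-1.0) picks the pitch."""
    global idle_ticks
    notes.append([SCALE[int(x * len(SCALE)) % len(SCALE)], 1.0])
    idle_ticks = 0

def tick():
    """One pass of the delay loop: echo every live note, a bit quieter."""
    global idle_ticks
    idle_ticks += 1
    if idle_ticks > IDLE_TICKS:
        tap(random.random())  # left alone, the app "plays" notes itself
    for note in notes:
        play(note[0], note[1])
        note[1] *= DECAY
    notes[:] = [n for n in notes if n[1] > 0.05]  # drop inaudible notes

if __name__ == "__main__":
    tap(0.3)             # one tap from the "user"...
    for _ in range(10):
        tick()           # ...echoes and decays, and eventually the app joins in
```

Even in a toy version like this, the authorship question is baked in: the same loop that echoes your taps will eventually start tapping for you.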

It seems clear to me that if you just start up the app and let it go without touching the screen, you’re not really composing anything. (Or are you, John Cage?) If you start tapping the screen, you are now “playing” the “instrument.” But since so much of how the app works was defined (with plenty of built-in randomness) by the developers of the app, how much of the sound produced is really your composition, and how much is theirs?

The more you add, the more I think you are the composer. What if you overlay multiple tracks of Bloom within your DAW? That’s what I did: I ended up recording three separate passes of Bloom: one panned hard left, one panned hard right, and the other in the center. (In an interesting twist on the question at the beginning of the previous paragraph, on the center channel I didn’t actually tap the screen at all until about 20 seconds before the end of the piece.) What if you take it one step further and make Bloom merely one track in a multi-track recording featuring other instruments?
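An aside for the technically inclined: GarageBand’s pan knobs did all of this for me, but the layering itself amounts to something like this Python/numpy sketch, using a standard constant-power pan law, with noise arrays standing in for the three recorded takes.

```python
import numpy as np

# A minimal sketch of the layering, not what GarageBand does internally.
# The noise arrays below are stand-ins for the three recorded Bloom takes.

def pan_mono(signal, pan):
    """Place a mono signal in the stereo field with a constant-power pan law.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    angle = (pan + 1.0) * np.pi / 4.0          # map [-1, 1] onto [0, pi/2]
    return np.stack([signal * np.cos(angle),   # left channel
                     signal * np.sin(angle)],  # right channel
                    axis=-1)

one_second = 44100  # samples at CD-quality sample rate
pass_left = np.random.randn(one_second) * 0.1
pass_right = np.random.randn(one_second) * 0.1
pass_center = np.random.randn(one_second) * 0.1

# Sum the three panned passes into a single (samples, 2) stereo mix.
mix = (pan_mono(pass_left, -1.0)
       + pan_mono(pass_right, +1.0)
       + pan_mono(pass_center, 0.0))
```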

With a musical tool like Bloom, there’s no clear line to be drawn between instrument and composition, between app developer and composer and performer. And maybe that’s a good thing, philosophically. But I’m always nagged by those practical concerns: Can I really call the recording of my Bloom performance my own? It seems to me that if there’s a line anywhere, it can most clearly be drawn at the point of layering multiple tracks of Bloom, or of using Bloom along with other instruments, because at that point the resulting sound is no longer something Bloom could have created by itself.

Let’s take this all one step further: Reflecting on all of the concerns above, the only aspect of the discussion that troubles me, rather than inspiring me, is the legal one. Am I infringing copyright if I record myself playing Bloom and put it on my album? Why is that even a question? Copyright is broken. The fact is, there are very few original ideas, especially in music. Everything is borrowed. The cognitive dissonance that arises when we try to suppress that free exchange of ideas, which is an inherent part of human expression, can be paralyzing.

Or, you can just not worry about it. I’m trying.

#rpm12 day 3: A certain sameness

No profound personal reflection today, just some mundane observations on my efforts last night to continue exploring new territories in iPhone-based music making.

Not so much exploration. That’s the failure. Last night I did precisely what I had been trying to avoid: I went back and spent almost all of my time tinkering with and perfecting the piece of music I had made the night before, instead of setting it aside and cranking out something new.

All is not lost, as I did start working on a new piece of music during a brief break yesterday afternoon, which will be entirely composed and arranged using the Xenon app.

But my goal of making more experimental music is getting a bit off track. I feel like tonight I have to record an album’s worth of extended free-form improvisations as penance. Yes, that’s what I’ll do.

The good news is, the track I tinkered with sounds great!

#rpm12 day 2: Does the world really need more music?

One of my tentative song titles for this year’s RPM album poses a question, in humorous song-title-parenthetical form:

(Does the World Really Need) More Music (?)

I wondered that again as I awoke this morning with Death Cab for Cutie’s “Codes and Keys” in my head. It’s the title track from an album they released last year. It’s a pretty good album. Every time I listen to it I think, “This is pretty good. I should listen to it more often.” But I rarely do, because there’s just so much really good music being produced these days.

Do I really need to toss my little CD onto the already massive mountain of music (not the most poetic alliteration ever) being produced every year?

Well, that’s not really why we make music, is it?

I want my music to be heard. I want it to be enjoyed by others. But mostly I want it for myself. I have an urge to create that comes from a place I don’t completely understand. And yet I do it. I must do it. Because that’s what I do.

My music isn’t the expression of a troubled soul. I’m not baring my heart with the music I create. I just have sounds in my head and I need to get them out.

But the creative drive goes deeper. It’s not the most satisfying realization, but I’ve come to learn that on some level, I just need there to be things in the world that I’ve made. In the words of Steve Jobs, I want to make a “dent in the universe.” Existence is so incomprehensibly vast, and we are such an infinitesimal part of it. But, for a few dozen of the Earth’s trips around the sun, we’re a part of it. And, what’s more, we know it.

I guess that urge to simply leave a mark before I’m a pile of dust is the driving force behind a lot of the creative impulse, at least for me.

I used to think that this creative impulse was at least partly tied to an instinct for procreation, that bringing new life into the world was what I really felt compelled to do, for simple biological reasons. But I have kids now, and while they’re great in many ways, having them hasn’t lessened that urge to create art.

So, I continue to make music. I explore. I refine. I grow. And I keep trying to get it all out of my head and into the world.

This is not at all where I had intended to go with this post. I was going to just talk about the song I worked on last night, which ended up sounding a little bit like a Trent Reznor/Atticus Ross soundtrack… if all of their soundtrack tunes were 12 minutes long and ended with an extended, unaccompanied theremin solo. But that’s probably not as interesting as probing metaphysical reflection.

The short version of the daily progress report is, last night was another productive session, and I extensively employed two new apps I had just discovered through the App Store’s often questionable “Genius” tool: Alchemy and SoundPrism. The latter gets an endorsement from Jordan Rudess, which is good enough for me.

#rpm12 day 1: So far, so… good?

True to the spirit of RPM (I guess), I got things started last night at midnight. I think I succeeded in establishing my process for this year’s challenge: I recorded one complete piece of music, and now I am planning to set it aside and move on, recording another tonight.

On most of my albums, as soon as I finish recording a track, I begin fiddling endlessly with the mix and master, and usually I’m already starting to nail down the track sequence and titles for the final album.

This time I’m trying to exercise restraint. I did make a rough mixdown to listen to in iTunes, but I will leave all matters of final track selection, sequence, titles, and even mixing and mastering until I’ve recorded EVERYTHING and have a chance to step back and see how it all fits together.

The piece I recorded last night consists of 7 layers of increasingly chaotic Animoog synth tones, with a minimal, processed FunkBox beat bolted on to (barely) hold the proceedings together. It starts off deceptively serene, then quickly veers off into chaos, while still managing to be fairly listenable. I’m not, after all, making Metal Machine Music here. (Yes, Lou Reed’s 1975 is-it-a-joke-or-not album of industrial electronic minimalist noise is one of my favorite musical punching bags.)

So, I consider day 1 (or, more accurately, night 0) a success. I think.

One problem I have yet to resolve: most of the apps I’m recording with have stereo output, but the technique I’m using to capture sound from the iPhone into GarageBand on my Mac is mono. I’m using a 1/4-inch (mono) guitar patch cable, plugged into an 1/8-inch adapter, plugged into my iPhone’s headphone jack. The other end of the patch cable is plugged into my Behringer Guitar Link USB interface. It captures the sound well, but… mono. It’s not so bad with a piece of music like what I worked on last night, where I’m layering multiple tracks in GarageBand (so having mono input is slightly preferable), but it’s not going to work for everything. I know some of these apps allow you to record directly within the app, so in some cases I might do that, and then just drop the resulting WAV files into GarageBand for further editing.

#rpm12 day 0: The plan

We’re on the cusp of yet another RPM Challenge. This will be my fifth year participating, and this year’s planned project should definitely be my most unusual to date.

As I’ve noted previously, this year I will be recording my album entirely using my iPhone. I will record some/most of the tracks into GarageBand on my Mac, and I will do further post-production with the Mac, but I’ll produce every sound with — or through — the iPhone.

This year I’m also adding a challenge to my process: usually I enter a recording project with an overarching concept, but this time the only concept is that the iPhone is the instrument. Usually I come in with a clear set of song ideas or an overall compositional structure for an album, and quickly arrive at completed songs. (Last year, for instance, I had one song — “Spooncherry” — completely “in the can” by 4 AM on February 1.) This time around I am going to try to just record as much material as I can, in whatever form it may take, for the first half of the month, without working up any of it into a final state. Then I will spend the second half of the month sorting through the debris and trying to make sense of it all.

We’ll see.

I know I am not starting a revolution by making music on the iPhone. Plenty of people are doing a lot more with this than I am. I am just curious to see what I can produce. There is some precedent in my own work: I recorded the theme song to my podcast entirely on the iPhone (using the iPhone version of GarageBand), and earlier in January I recorded a 3-song EP on a Saturday afternoon.

It begins at midnight.