Friday, October 14, 2011

Fall odds and ends


Internet woes
The internet has not been all good to me recently. I often shop online and encourage everyone to do so, but when things go bad, it can (apparently) be a pain to get your money back.

My first (and second) internet mishap happened while I was trying to buy an RME Fireface 800 on eBay. On two separate occasions, after winning the item and paying through PayPal, eBay removed the listing because they suspected it was fraudulent. I got my money back in full in both cases, but it took a couple of weeks for PayPal to go through its formal complaint process, weeks during which my money was tied up and couldn't be used to buy a Fireface 800. After going through this twice and suffering through another bad experience (see below), I decided to buy a new interface from a Canadian store. The transaction went smoothly and I had the interface within the week.

Internet woes (Part II)
My internet misfortunes continued as a result of a transaction I initiated in late June when I ordered a mute from an online retailer. This mute has a microphone pickup inside it and I intended to use it in my meta-trombone project, so I was quite keen to get it (and very happy to find a retailer in North America). But it wasn't coming. So I contacted the seller in July and again in August. By mid-August I wanted my money back. When I received no reply from the seller, I lodged a complaint with the Better Business Bureau and informed the seller. No reply from the seller.

Early in September, I reviewed the seller's novel approach to customer service on an internet forum and informed the seller. The seller promptly went apeshit. He could have taken this opportunity to renew communications with me and apologize for missing my previous correspondence; instead, he called me a liar and publicly insulted me on the forum. Certainly not the professional behaviour one expects from a seller... Regardless, the seller agreed to refund me (less restocking fees). However, the seller also sent an email to the Better Business Bureau impersonating me, in which 'I' apologized and withdrew my complaint. I won't actually name the seller (due to repeated threats of legal action), but I would encourage any reader to contact me before placing any significant order for musical equipment from an online retailer located in the north-eastern United States.

Computer woes
One thing you shouldn't do with your MacBook Pro is drop it on the floor. Take my word for it; no need to try it for yourself. This could've been much worse, but I got off with cosmetic bruises and a dead hard drive. That's not too problematic, since I'm rather paranoid about backing up my data. Up until now, I've been using CrashPlan to back up my main drive, and while I thought it worked quite well, spending some time with its restore function has made me yearn for another approach. I've since switched to Carbon Copy Cloner and I heartily recommend it for all your OS X backup needs. The best thing about it is that it creates a bootable duplicate of your disk. This means there is no downtime and no need to reinstall software (and search everywhere for licence information). Also, it doesn't put your files in an undecipherable proprietary format that makes it impossible to locate them without the software in question (that is so 20th century).

Learning stuff
Having successfully demonstrated competency in rudimentary university-level mathematics, I've started learning computer programming at l'Université du Québec en Outaouais (in accordance with my previously mentioned epiphany). I'm currently enrolled in the introductory Java programming course, but after going through the Stanford introductory course, this one is a breeze. Feeling inadequately challenged, I also signed up for a free online course in artificial intelligence at Stanford. I'm one of 180,000 or so students enrolled. That's nuts.

I've also been teaching myself to code in Scheme, partly because it's the coolest programming language I've ever seen, but mainly because it's the lingua franca of livecoding (see the Impromptu comments below).

To solidify my hold on both mathematics and programming, I've been using Java, C++ and Scheme to solve mathematical problems posted on Project Euler. I've only solved 19 problems to date and I wish I had more time to spend on them, since they're great fun, they make me feel smart and I'm learning stuff. What else can you ask for?
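For flavour, the site's first problem asks for the sum of all multiples of 3 or 5 below 1000, which in Java comes down to a few lines (this one is public knowledge everywhere, so I'm not spoiling much):

// Project Euler, Problem 1: sum of the natural numbers below 1000
// that are multiples of 3 or 5.
public class Euler1 {
    public static void main(String[] args) {
        long sum = 0;
        for (int n = 1; n < 1000; n++) {
            if (n % 3 == 0 || n % 5 == 0) {
                sum += n;
            }
        }
        System.out.println(sum); // 233168
    }
}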

Meta-trombone
Not much development since I last posted about the meta-trombone, but a lot of conceptual work is happening behind the scenes (mostly thinking about effects). I'm now very interested in Impromptu and its novel blend of coding, AudioUnits, OSC and MIDI. I'm not sure exactly how it will be involved in this project, but I know it will be. One thing Impromptu allows is to rapidly create AudioUnits by compiling code with its AU wrapper. This would make it possible to create some signal-processing wonders of my own and also to implement some of my Bidule patches as compiled code, thus increasing efficiency. Normally, there's little drawback to using Bidule, but my audio to midi patch is a bit of a drain on the CPU and could be improved by moving it to Impromptu. This livecoding platform also has some interesting video applications that I'm more than willing to explore.

Camera (film)
I purchased an old Canon film camera and a couple of lenses online and found an old light table locally. I'm amazed at how cheaply this equipment can be had... an equivalent digital setup would've meant an investment of several thousand dollars. For a bit over $200, I have seven lenses, an SLR and a light table. Amazing.

My main reason for getting these technological vestiges of a previous century is to further explore a collage technique that I learnt from Collin Zipp when I attended a workshop at Daïmon. During this workshop, I created a little video from cut-up and scratched film negatives. While the video is OK, what struck me was that a lot of the individual frames made stunning images in themselves, and I'd like to explore that in the coming months, perhaps using this technique to create a short comic.

Saturday, July 9, 2011

Meta-trombone - part II

These last few weeks have seen much development on the software side of the meta-trombone controller. I created a monstrous Bidule patch to instantiate my vision of what this integrated controller/instrument should be.

Here's a top level view of the patch:


The audio signal from the trombone goes to an audio to midi converter that I developed to extract midi notes from my performance. It also goes to the external outputs and/or the Mobius looper through an audio matrix that makes possible all the complex routing my setup requires. Finally, the trombone sound is also fed to an envelope follower that, depending on the performance mode, can modulate the playback of Mobius' tracks (more on this below).

Here's a look at the top level of the audio to midi group:

This group is based on an audio to midi converter I found on the Bidule forum. From what I gathered looking at this group, it seems to determine pitch by comparing the incoming signal to a delayed version of itself. The amount of delay is based on frequency and sampling rate. When the two signals are combined, if the incoming frequency matches the one the delay was tuned for, they phase-cancel to near 0. This process is repeated for every note we want to evaluate, and the note whose residual is closest to 0 is the one returned.
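As I understand it, a bare-bones version of that idea would look something like this in Java (my own sketch, not the actual Bidule group):

// For each candidate MIDI note, delay the signal by one period and
// measure the average residual |x[n] - x[n - period]|. A matching
// pitch cancels to near zero, so the smallest residual wins.
static int detectMidiNote(float[] x, float sampleRate, int[] candidates) {
    int bestNote = -1;
    double bestResidual = Double.MAX_VALUE;
    for (int note : candidates) {
        double freq = 440.0 * Math.pow(2.0, (note - 69) / 12.0);
        int period = (int) Math.round(sampleRate / freq); // delay in samples
        double residual = 0.0;
        for (int n = period; n < x.length; n++) {
            residual += Math.abs(x[n] - x[n - period]);
        }
        residual /= (x.length - period); // normalize by sample count
        if (residual < bestResidual) {
            bestResidual = residual;
            bestNote = note;
        }
    }
    return bestNote;
}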

The original group worked well enough, but it was a bit sluggish for my taste and used up considerable system resources. My approach, as I indicated earlier (and as implemented by students at Cornell), is to reduce the number of notes the system has to evaluate by tracking the position of the trombone slide and evaluating only those notes that can be produced at the current slide position. I'm still working on a reliable way to track slide position, but right now this audio to midi converter works quite well for all notes in first (closed) position.
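The candidate list itself is easy to compute: first position sits on the B-flat harmonic series, and each position out lowers the whole series by roughly a semitone. A sketch, assuming partials 2 through 8 (the 7th partial is famously flat, so 34 semitones is just the nearest fit):

// Semitone offsets of partials 2..8 above the fundamental.
static final int[] PARTIAL_OFFSETS = {12, 19, 24, 28, 31, 34, 36};

// Candidate MIDI notes for a slide position (1..7). First position
// sits on the Bb1 fundamental (MIDI 34); each position out lowers
// the series by one semitone.
static int[] candidateNotes(int slidePosition) {
    int fundamental = 34 - (slidePosition - 1);
    int[] notes = new int[PARTIAL_OFFSETS.length];
    for (int i = 0; i < notes.length; i++) {
        notes[i] = fundamental + PARTIAL_OFFSETS[i];
    }
    return notes; // feed these to the detector's candidate list
}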

The midi notes that it generates are sent to Mobius to trigger various scripts depending on the performance mode currently selected. Before I describe these various modes, let's take a look at the control interface I created in TouchOSC, which allows me to control this complex patch from my iPod Touch.


The first column on the left selects the performance mode. The one immediately to its right selects the Mobius track that is affected by the performance mode. The red column selects the Mobius track that will record performance from the trombone, an affected Mobius track or both. The last column on the right turns the audio output for that track on and off. Finally, the purple button on the bottom determines whether trombone sound is sent to external outs and Mobius tracks.

While this interface may look simple enough, there are many logical conditions evaluated behind the scenes each time a button is pressed. For instance, if Play 1 and Rec 1 are selected, the audio from track 1 is not actually fed into itself. If Rec 2 is selected and Out 2 is off, the system turns it on automatically. And so on... These logical conditions are the core assumptions I built into the system to make it behave as I intend. A big part of that was defining how the performance modes would affect the various system states to produce the required results.
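Sketched in Java, two of those rules might read like this (the names and structure are mine; in the actual patch these conditions live in Bidule, not in code):

// One boolean per Mobius track for each column of the interface.
boolean[] play = new boolean[8], rec = new boolean[8], out = new boolean[8];

// Rule 1: a track never records its own output.
boolean feedPlayIntoRec(int playTrack, int recTrack) {
    return playTrack != recTrack;
}

// Rule 2: selecting Rec on a track forces that track's output on.
void onRecPressed(int track) {
    rec[track] = true;
    if (!out[track]) {
        out[track] = true;
    }
}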

Performance modes
Starting with the simplest: in Off mode, the midi notes interpreted from the trombone performance are discarded. In Synth mode, the notes played on the trombone modify the playback rate of the Play track in Mobius. Also, the playback of that track is modulated according to the trombone performance by virtue of the envelope follower.

In Envelope mode, the track playback is also modulated by the envelope follower, but the midi notes no longer change the playback rate. Instead, each note played on the trombone changes the playback location of the loop. Trigger mode is the same as Envelope mode, but without the envelope!
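In code-like terms, the mode logic might dispatch like this (a sketch; the method names are stand-ins for the Mobius scripts and envelope routing, not real API calls):

enum Mode { OFF, SYNTH, ENVELOPE, TRIGGER }

void onTromboneNote(Mode mode, int midiNote, double envelopeLevel) {
    switch (mode) {
        case OFF:      // interpreted notes are discarded
            break;
        case SYNTH:    // notes set the play track's playback rate
            setPlaybackRate(midiNote);
            setTrackLevel(envelopeLevel);
            break;
        case ENVELOPE: // notes jump to playback locations instead
            movePlaybackTo(midiNote);
            setTrackLevel(envelopeLevel);
            break;
        case TRIGGER:  // same jumps, but without the envelope
            movePlaybackTo(midiNote);
            break;
    }
}

void setPlaybackRate(int note)   { /* Mobius rate-shift script */ }
void movePlaybackTo(int note)    { /* Mobius move script */ }
void setTrackLevel(double level) { /* envelope follower -> track volume */ }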

Things to do
I still need to complete the pitch recognition system by developing hardware to track slide position. I also need to develop the midi module element of the system. Presently, I'm thinking of adding an arpeggiator, a quantizer and perhaps a midi looper. Next, I will add effects! Not only do I want to modify the input trombone sound and the loops, but I would also like to map trombone performance to changes in effect parameters and effect selection. For instance, I could map a filter's cut-off frequency to trombone notes so that playing a certain note would emphasize a specific spectral aspect of the effected loop.

How will it sound?
While there's no way to know exactly what kind of music I'll end up producing once the whole system is up and running, below is a recording of a recent practice session that used only the Synth and Envelope modes. No effects were added and it is rather poorly mixed, but it does, I believe, show promise. One thing I'd like to note about the performance is that everything you hear, with the exception of the solo later in the piece, is created from a "performance" of the first recorded loop (what you hear twice in the beginning).

Wednesday, June 8, 2011

Playing the loop


After watching the video above, I started fooling around with this technique in Ableton. Using recordings I made with Big Band Caravane in 2006 as a starting point, I isolated the beginning of phrases and swells instead of drum sounds as Thavius Beck does in the video.

Here are some results:



Livelooping application
As fun as it was to pretend to be a DJ for a couple of afternoons, my musical path does not lie in that direction. I quickly began to consider ways I could apply this technique to a live recording in a given performance. Now, I know there are easy ways of doing this with the full version of Live (clip to midi controlled rack instrument), but I've been reluctant to upgrade from my free watered-down version, in part because I believe a new version is just around the corner, but mostly because I want to limit the number of programs I use in performance. So far, I've been able to do everything with Bidule and Mobius, and I've only looked at Ableton to work out performance strategies and to try out ideas I find on the web.

Fortunately, it turns out it is easy enough to tell Mobius to trigger a loop from a certain point. The following script triggers playback from 3/16 of the loop's length, the fourth of sixteen evenly spaced trigger points (counting the loop's start as the first):

!name trigger4
Variable newFrame loopFrames / 16 * 3
move newFrame
end
I wrote sixteen scripts like the one above, each with a different multiplier, and assigned each one to a pad on my Trigger Finger using MIDI binding. Once a loop is recorded, pressing a pad triggers playback from one of sixteen positions relative to its length. If I have a one-bar loop, that's every 16th note. Four bars will give me a trigger point at every beat.
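Since the scripts differ only in the multiplier (triggerN uses N - 1, as trigger4's 3 shows), they could even be generated rather than typed. A throwaway Java sketch, with each generated script still going in its own file:

// Print the sixteen Mobius trigger scripts to standard output.
for (int n = 1; n <= 16; n++) {
    System.out.printf("!name trigger%d%n", n);
    System.out.printf("Variable newFrame loopFrames / 16 * %d%n", n - 1);
    System.out.println("move newFrame");
    System.out.println("end");
    System.out.println();
}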

Here are some results using that approach with a song from my homage to the cookie monster:



I’m quite excited about the possibilities this technique highlights. I quickly came up with new and interesting ideas using old ones. I found this process of creating from existing material very much in line with my current interest in collage in comic books and video.

Meta-trombone
If anything, these new experiments have only further revealed the need for an integrated instrument/controller: the meta-trombone. My current vision is to make those loop points trigger according to what notes I play on the trombone. After recording an initial loop, this would allow me to continue to play a ‘duet’ with myself since every note played would have a double purpose, serving both as musical material and trigger signal.

Technically, the challenge is to translate the notes I play into MIDI notes. This is a rather complex problem to solve if I ask the computer to determine the note played from any of 30+ possibilities, but it becomes easier if I narrow the options to something more manageable.

One way students at Cornell tackled this problem, in the case of a MIDI trumpet, was to split it in two. First, they developed a system to determine which valves were activated. This limited the number of possible notes (8 or so) and they were then able to track the notes being played.

I intend to take a similar approach for my meta-trombone controller. I’ve already had good results tracking the notes I play in 1st position using Bidule and I think I can scale this up to all positions, provided that I can find a way to determine slide position.

Thankfully, I’m not the first person to tackle this problem and I found some interesting experiments online. The first I looked at was Nicolas Collins’ Trombone-Propelled Electronics (and its various incarnations). To track slide position, Collins cleverly used a retractable dog leash to turn the knob on an optical encoder (figure 4 in the pdf). I considered using a similar approach using string potentiometers, but I was unable to find one with a string tension that would not impede playability.

The wireless options I've encountered used either optical or ultrasonic sensors. The composer Marco Stroppa's work "I will not kiss your f.ing flag" called for an augmented trombone that would use the slide as a continuous controller to change parameters during performance. The solution adopted was to place a red laser emitter on the outer slide (the moving part) and a photoelectric diode receiver at the fixed end. However, the article seemed to indicate that the reliability of the system could be improved.

Ultrasonic emitter and receiver pairs seem more promising. A study trying to link technical ability and movement efficiency in trombone playing had success with an ultrasonic sensor, noting that the system they developed was lightweight and did not detract from playing. They placed an emitter/receiver unit at the end of the outer slide and (from what I can tell) they measured the distance the sound travelled from the emitter to the receiver, bouncing off the player on its way.

Neal Farwell developed multiple technical systems to adapt the trombone for the electro-acoustic performance of his Rouse. One of them, called the uSlide, is an ultrasonic emitter and receiver pair that tracks slide position. His approach differs from the one above: he put the emitter on the outer slide and kept the receiver fixed near the mouthpiece. This seems a more robust approach, but I'll have to try things out.

Also of interest, the trombone instrument from the Imaginary Marching Band project tracks "slide" position with an ultrasonic sensor connected to an Arduino. The open-source software developed for the Arduino outputs MIDI note information (including pitch bend).
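I haven't built anything yet, but the underlying arithmetic is simple: sound travels at roughly 343 m/s at room temperature, so an echo time converts to a distance, and a distance to a slide position. A hypothetical Java sketch (the slide travel and thresholds are rough guesses; real positions aren't evenly spaced, so the values would need calibrating on an actual horn):

// Distance for a bounce-back arrangement: the ping travels out and
// back, hence the division by 2.
static double distanceMetres(double echoSeconds) {
    return echoSeconds * 343.0 / 2.0;
}

// Map a distance to one of seven positions, assuming ~60 cm of
// total slide travel divided evenly (a simplification).
static int slidePosition(double metres) {
    int pos = 1 + (int) Math.floor(metres / 0.60 * 7);
    return Math.max(1, Math.min(7, pos));
}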

There may be ultrasonic sensors in my immediate future…

Monday, March 28, 2011

My greatest hit

This little video of mine recently passed the 5,000-view mark.


This is by far the most interest anything I've ever produced has managed to generate. I don't think anything even comes close... certainly not my ode to the cookie monster or any of my other videos on YouTube.

So what made this my biggest 'success'?

I'm not sure... But I know it's not my marketing strategy. I haven't done anything to promote this video (or anything else) beyond uploading it to YouTube and adding some tags. This strategy has not generated much viewership for my other videos, so there has to be something about this one that makes it stand out.

Let's see... It's an animated bust of a nude study in blue conté, with a computer-generated voice (out of sync with the image) telling people about stoicism and existentialism, featuring some symbolic logic and my terrible handwriting. The whole thing is technically embarrassing and was likely the result of an afternoon's work (I don't actually remember the circumstances surrounding the creation of this animation).

So, I'm thinking subject matter is probably a key factor here...  

I got a fortune cookie once that read something like: "The philosophy of one century is the common sense of the next (in bed?)." Now, as any student of philosophy will tell you, there was no single dominant philosophy in the twentieth century. Quite the contrary: there have never been so many people actively involved in philosophical activity, with different viewpoints, opinions, stances... it's been a very interesting hundred years and you could probably spend a lifetime studying the very recent philosophical past (some lucky people do). However, one philosophy (for lack of a better turn of phrase... it seems odd to count philosophies) that resonated particularly well with people in the twentieth century was existentialism.

One can argue that existentialism has become the common sense of this century. After all, it does seem obvious that you are defined by your actions, not by your intentions, and that the disconnect between what you want to be and what you are, or between what you want the world to be and what it is, can bring about all the agony familiar to those in the throes of an existential moment.

What I find particularly interesting are the demographics of the viewership. Males of all ages make up most of the audience (68%), but most of the female viewership consists of teenage girls. I really don't know what to make of this, but I find it interesting nonetheless.

A couple key findings, when I compare the popularity of this video with my other youtube offerings:

  • build it and they will come (even if you don't tell them about it);
  • English is the language of the web (French is mostly ignored);
  • if content is interesting, people will overlook technical flaws in presentation;
  • folks like their animation;
  • most referrals come from YouTube related videos.
This last point highlights the importance of tags and adequate descriptions...  I'll have to work on this. But mostly I believe the key is subject matter, since this is what drove the related video referrals in the first place and brought me the bulk of my viewership.

Saturday, February 5, 2011

Sampling and sound design with Mobius

Lately, I’ve started exploring the use of the Mobius looper as a sampler to create new and interesting sounds to be triggered by my guitar controller. By doing this, I hope to recreate, in a live setting, a technique I developed in the studio that allows me to create a pitched sound by taking a very short part of a recording (usually vocal) and repeating it many times. I like to think of this as a type of granular synthesis, although, strictly speaking, it's not.

Start with a short loop

The first step is to create a short loop in Mobius. The best way to achieve this is to use Sustain Record, since this function records only while the button is held down and stops automatically when it is released. At this point, you should have something that might sound like this (if you were to sample my voice).



Interesting, but since the repetitions aren’t fast enough, it doesn’t sound like a musical note. There are two ways to make the repetitions faster. We could increase the Rate of playback, which increases the speed at which the loop is played, or we could make the loop even shorter, thus increasing the frequency of repetitions. There are many ways of doing this, but my favourite so far makes use of the loop windowing script Jeff Larson put together. Activating this script selects a single subcycle to play back. Using the script again selects another subcycle. In this way, a single loop can yield many different sounds. Here’s how things sound at this point using both of these options on the above loop.



Well, at least we have a tone!
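The arithmetic behind that tone: a loop of N samples repeats at sampleRate / N Hz, so the loop has to get very short before the repetition reads as pitch rather than rhythm. A quick sketch:

// A loop repeats at sampleRate / lengthInSamples Hz. At 44.1 kHz,
// a 401-sample loop repeats about 110 times per second and reads
// as an A2 rather than as a rhythm.
static double repetitionRateHz(double sampleRate, long loopSamples) {
    return sampleRate / loopSamples;
}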

Sound design

There are countless possibilities to further manipulate these sounds, both in and out of Mobius. For now, I’ve limited myself to three plugins: Discord3, Chopitch and UpStereo. Discord3 is where the meat of the sound design is taking place. It is a formidable effect that allows me to introduce a lot of complexity to any sound. While Chopitch adds character to the sound, its primary role is to shift the pitch of Discord3’s output according to the midi signals sent from my guitar controller. The last plugin, UpStereo, increases the sound’s presence in the mix.

There are also some sound design possibilities within Mobius itself. For instance, the sample can be changed by sampling over the existing loop using Replace (with no Secondary Feedback). When this is done with a rate-shifted short loop, the results are rather interesting. Because the repetitions are occurring at a sufficient frequency, we retain the perception of a tone, but the sound in the loop is no longer rate-shifted and retains more of its initial character.

Another option is to briefly overdub the loop to add further complexity to the sound. However, it is important to pay close attention to the Secondary Feedback setting while doing so, since things can easily get too loud when overdubbing such a short loop at full feedback.

It’s interesting to try out some functions and scripts to hear their impact on the sound. Halfspeed, as expected, will lower everything by an octave. Using the previously mentioned loop windowing script on a rate-shifted loop will produce a wispy high tone. Using a version of this Auto Reverse script will add a lot of ‘dirt’ to the sound.
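For reference, the rate math is the usual equal-temperament relation: each semitone of rate shift multiplies playback speed by the twelfth root of two, so Halfspeed (-12) halves it and the Rate+24 used in the examples below quadruples it:

// Playback-speed factor for a rate shift in semitones.
static double rateFactor(int semitones) {
    return Math.pow(2.0, semitones / 12.0); // -12 -> 0.5, +24 -> 4.0
}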

Sound examples

These examples all start from the same short loop recorded in Mobius.

Example 1: LoopWindowed -> Discord3 -> Chopitch -> UpStereo

Example 2: LoopWindowed -> AutoReverse -> Discord3 -> Chopitch -> UpStereo

Example 3: LoopWindowed -> Rate+24 -> Discord3 -> Chopitch -> UpStereo

Example 4: LoopWindowed -> Rate+24 -> AutoReverse -> Discord3 -> Chopitch -> UpStereo

Example 5: RateShifted -> Discord3 -> Chopitch -> UpStereo

Example 6: RateShifted -> AutoReverse -> Discord3 -> Chopitch -> UpStereo

Example 7: RateShifted -> LoopWindowed -> Discord3 -> Chopitch -> UpStereo

Example 8: RateShifted -> Overdub -> Discord3 -> Chopitch -> UpStereo

Friday, January 28, 2011

Computational Epiphanies

A few weeks ago, I came to the realization that my endeavours are not as disparate as I had previously thought. There is one thread that runs through all of my interests and that could be used to weave a coherent story from my seemingly incompatible creative output. I've realized that my writing, my drawing, my music and my video work all involve a computer and would not be possible without one (at least using the methods I've developed and on which I presently rely).

This may appear trivial at first glance. After all, a great many people use computers on a daily basis to do a lot of different things. What I find interesting about this insight is that it can help me overcome one lingering problem with my refusal to specialize: the impossibility of dedicating enough time to completely master any subject. It's not that I'm lazy, but there's simply not enough time in a day to do it all at once.

However, I've previously learned that working on general skills (reading, writing, mathematics, logic) can yield benefits in all areas of interest.   It's becoming clear to me that becoming a better computer user would make me a better composer or writer (or comic book artist or video filmmaker).  This is why I've decided to dedicate a large part of this year's free time to learning the fundamentals of computer programming and software engineering.

I know some may be scratching their heads at this point, wondering how computer programming will make me a better musician/composer, comic book artist, writer, or filmmaker.  Well, I'll try to explain...

Computers and music

This should be an obvious one given my current musical obsessions.  More and more, I'm thinking about how to use my computer in novel ways to create interesting music both in performance and in the studio.  Lately, working with Max/MSP or Bidule, I've often wished I were better at coding to create either plugins or software to help me realize my musical vision.  And don't get me started on live coding (man that looks like fun, look at the video below).  To help me get there, I've started working through a wonderful book I recently purchased: The Audio Programming Book.



Algorithms are Thoughts, Chainsaws are Tools from Stephen Ramsay on Vimeo.

Computers and visual art

This is another field where computers have become indispensable. Knowing how to program would not only mean that I could write some JavaScript to automate functions in Photoshop, but also that I could write plugins or standalone software that interact directly with the images. Already, with my beginner's knowledge of Java, I was able to write a few simple programs that performed operations directly on the pixels of an image (moving them, changing colours, replacing them with pixels from another image). There's a lot to explore on this front.
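As a minimal illustration in the same spirit (not one of the actual programs), inverting every pixel of an image in Java takes little more than a nested loop:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class InvertImage {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File(args[0]));
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int argb = img.getRGB(x, y);
                // keep the alpha byte, invert the colour bytes
                img.setRGB(x, y, (argb & 0xFF000000) | (~argb & 0x00FFFFFF));
            }
        }
        ImageIO.write(img, "png", new File(args[1]));
    }
}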

Computers and writing

This is one area where even my most favourable readers might think I shouldn't expect to gain any benefits from learning to program. I'll happily grant that there hasn't been very much development in terms of digital creative tools for literature, but I've had an interest in computer-generated (or assisted) prose and poetry since reading Charles Hartman's Virtual Muse about three years ago. Hartman, under the influence of John Cage, is interested in using computers to introduce elements of chance and randomness into the writing process. He wrote programs that either randomly reordered previously written lines or generated grammatical sentences based on syntactical templates and a lexicon. He would use the results to shake things up and generate new thoughts, in effect collaborating with the computer.
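A toy version of that template-and-lexicon idea might look like this in Java (my own sketch, nothing like Hartman's actual programs; the template and lexicon are invented):

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class TemplateMuse {
    public static void main(String[] args) {
        Map<String, String[]> lexicon = new HashMap<String, String[]>();
        lexicon.put("NOUN", new String[] {"trombone", "loop", "winter", "machine"});
        lexicon.put("VERB", new String[] {"remembers", "dissolves", "repeats"});
        lexicon.put("ADJ",  new String[] {"hollow", "patient", "electric"});

        Random rng = new Random();
        String template = "the ADJ NOUN VERB the NOUN";
        StringBuilder line = new StringBuilder();
        for (String token : template.split(" ")) {
            String[] words = lexicon.get(token);
            // replace a known slot with a random word, keep other tokens
            line.append(words == null ? token : words[rng.nextInt(words.length)]);
            line.append(' ');
        }
        System.out.println(line.toString().trim());
    }
}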

I would prefer to adopt an approach similar to how we think of filters and effects in music production: an input text would be run through a program, 'effected' according to a set of instructions, and the result displayed. I've recently become aware of the work of Daniel Howe, whose RiTa toolkit for Java (as presented in his doctoral dissertation) aims to give writers the resources to harness the power of computation to produce prose and verse. There are also the folks at the Expressive Intelligence Studio at UCSC and the group running Grand Text Auto that bear watching.

Computers and computers

For all these reasons, I'm very excited about learning to code. In fact, I've already started by following introductory lectures in Java programming from Stanford University. This class has been great so far and I quite enjoyed coding my own version of Breakout and some other fun games. I'm grateful to Stanford for making these courses freely available and I intend to go through all the lectures for the introductory classes and, in time, some of the more advanced material as well. I've also signed up for a certificate in information technology at l'Université du Québec en Outaouais and I'm working through the math prerequisite this semester.