Because sometimes you just need some quality time together off the beaten path.
Today is the first time in my life that I played piano; of course, saying I "played piano" is relative, as I'm just coming to grips with where the notes are. Regardless of how poor my ability is, I was still able to identify the keys, play the chords, and strike the notes for a tune nearly anyone could recognize. Whole notes, half and quarter notes, measures, bars, tempo, keys, chords: these are the Legos I'm trying to fit together.
Let’s talk about DNA and sequencing, though what I really want to discuss is rhythmic sequencing. It won’t be a deep discussion about genetics, just a note or two. I certainly respect the work and have learned more than a little about our double-helix blueprints from Kary Mullis to Craig Venter; as a matter of fact, they are icons in my book of people who have inspired me. But nothing in their body of work will supply the extra gray matter I wish I possessed, the kind that might help me better understand this wild beast known as music and, more specifically, sequencing.
There is a correlation I see between our DNA and music, and that is: patterns. Our genetic code is built from four bases, Thymine, Cytosine, Adenine, and Guanine (TCAG), and these four molecules are organized into patterns that repeat billions of times in order to bring sense and meaning to the building blocks of our very being. In popular music we are typically working with 4 beats per measure (4/4 time), often grouped into 4-bar phrases, and it is through these repetitions that the order of music and its rhythms become the sequencings of sound most appealing to us humans at this time in our history.
If you think the study of genetics is difficult, that is roughly where my mind is right now in regard to building my first musical sequences. A listener may hear 120 BPM and never give a second thought to the fact that they are hearing two beats per second, but a second is a lot of time. Try counting as high and as fast as you can in one second; I can get to seven or eight speaking the numbers out loud. So between the beats are pulses where things like snares can be triggered.
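To make that concrete, here is a quick sketch of the arithmetic. The 24 PPQN resolution is my own assumption for illustration (it is a common clock resolution), not a detail from this setup:

```python
# At 120 BPM a beat (quarter note) lasts 60 / 120 = 0.5 seconds.
# A clock running at 24 PPQN (pulses per quarter note) slices that
# half-second into 24 pulses, and those in-between pulses are where
# things like snares or hi-hats can be triggered.

def pulse_interval_ms(bpm: float, ppqn: int = 24) -> float:
    """Milliseconds between successive clock pulses at a given tempo."""
    beat_ms = 60_000.0 / bpm   # one quarter note, in ms
    return beat_ms / ppqn      # one pulse, in ms

print(pulse_interval_ms(120))  # about 20.8 ms between pulses
```

So even at a moderate tempo, a sequencer has dozens of trigger opportunities packed into every second.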
When you consider that it is not uncommon for a song to have upwards of 100 voices that come in and out of the mix over the course of the track, you realize that each of those voices has its own timing: it is either sequenced into the mix at a particular moment, or its individual elements conform to the timing dictated by the sequencer and its clock.
While it might be too ambitious for me to consider even two voices simultaneously, just understanding one sequenced voice has been a hurdle. Yesterday I wrote about clock signals; it is from these timing devices that the master clock sets the rest of the voices in the track to work off the same beat structure. Okay, so which gates and triggers, in what sequence, make for interesting rhythmic patterns? This is where I need to start experimenting with the basics, such as a bass line or a kick-drum pattern. I could use an already-written MIDI track and set it down as the basis to start building a song upon, but then I don’t feel I’d fully understand the fundamentals.
And so I struggle to learn the basics of when to trigger a pulse, send a gate to a voice, or attenuate a voice that was just triggered or modulated for pitch. Someday I will come to grips with this genetic soup of sounds and timings that feels like it is just beyond the horizon of my comprehension.
Music, to the casual listener, is mostly about rhythm, melody, and lyrical content. To someone learning how to make music, one of the first lessons that becomes apparent is that music is all about timing. Clocks, triggers, gates, pulses, PPQN (Pulses Per Quarter Note), randomness, steps, and modulation of all of these play a central role in how a Eurorack modular system stays in sync, creates and evolves rhythms, and moves your piece forward, even when going in reverse.
I’ve chosen Pamela’s New Workout (PNW) from ALM as my master clock; I had tried the Arturia BeatStep Pro before deciding I wanted everything in the box. Even once settled on a clocking device, there is still an incredible depth of knowledge to acquire: how to divide and multiply the clock signal, whether to apply a Euclidean rhythm, or whether to randomize its timing.
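For a sense of what a Euclidean rhythm mode is doing under the hood, here is a sketch using a simple accumulator method; it produces rotations of the classic Bjorklund patterns. This is my own illustration, not PNW's actual implementation:

```python
def euclidean(hits: int, steps: int) -> list[int]:
    """Distribute `hits` triggers as evenly as possible over `steps`
    steps, which is the core idea behind a Euclidean rhythm setting."""
    pattern, bucket = [], 0
    for _ in range(steps):
        bucket += hits
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)  # fire a trigger on this step
        else:
            pattern.append(0)  # rest
    return pattern

# 3 hits spread over 8 steps: a rotation of the familiar "tresillo" feel.
print(euclidean(3, 8))  # [0, 0, 1, 0, 0, 1, 0, 1]
```

The appeal is that one knob (hits per cycle) yields musically useful patterns without programming each step by hand.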
The PNW has eight outputs and each of them can be independently clocked. As the master clock I patch out from this source to sync other modules that need to stay in time with each other. Even a random clock event should typically be in time with the rhythm of the piece that is being created.
Each of the eight outputs can in turn be divided or multiplied within the PNW, or by routing a clock signal to something like the Doepfer A-160-2 Clock Divider or the Animodule Tik-Tok Divider/Multiplier. If I want a random synced clock I have a couple of choices here too: taking a PNW output into the SSF Ultra-Random Analog or into the Make Noise Wogglebug, both of which specialize in random clock signals. I can take one of these external clock outputs into my Stillson Hammer sequencer and adjust the timing on a per-track basis right within the Stillson; the same goes for many sequencers. In all, I currently have more than a few dozen devices that benefit from having a clock signal sent to them, while nearly everything downstream of these modules is the recipient of those perfectly (or randomly) clocked timings.
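The division and multiplication those modules perform is simple arithmetic on the pulse interval. A minimal sketch (the module names above are real; this code is only my illustration of the math):

```python
def clocked_interval_ms(bpm: float, divide: int = 1, multiply: int = 1) -> float:
    """Interval between output pulses after a clock divider (/divide)
    and/or a multiplier (x multiply) acts on a quarter-note clock."""
    beat_ms = 60_000.0 / bpm
    return beat_ms * divide / multiply

print(clocked_interval_ms(120))              # straight clock: 500 ms
print(clocked_interval_ms(120, divide=4))    # /4: one pulse every 2000 ms
print(clocked_interval_ms(120, multiply=2))  # x2: one pulse every 250 ms
```

A /4 output might drive a slow bass line while a x2 output drives hi-hats, and everything still lands on the same grid.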
It’s daunting to think about which modules and sounds would benefit from particular timing signals. I have more than a few passive signal multipliers, also known as mults (clock signals do not need buffered mults the way CV signals do, since clocks send pulses that are not reliant on precise voltages to convey their information accurately), but I will still need to work out which devices I want to send every manner of timing in order to achieve whatever is stewing musically in my imagination.
Ten days ago I wrote a blog entry about signal routing and the difficulties inherent as an audio system becomes more complex. In that entry I explained that I’m trying to figure routing out so I can start learning to manage the quality of the audio signal for recording. I’m back today with a note about how my lessons in understanding audio levels are progressing.
First of all, I have a potentially “hot” signal leaving my Levit8 mixer and going into my Expert Sleepers ES-8 audio interface, which sits between my modular gear and my DAW. Once in Bitwig I can record multiple simultaneous audio channels from my modular gear, but how do I attenuate those signals so they don’t clip? To answer that (possibly incorrectly) I watched a number of videos about mixing and something called the K-System. Instead of regular VU meters I’ve opted for the K-System created by Bob Katz. I’m not going to explain why today, but I’ve come to the conclusion that it’s the metering system for me.
Once a signal enters an audio channel in Bitwig, it typically clips after I’ve armed the channel. Back at the Levit8 I need to attenuate the signal down, way down and super low in some cases. From there I throw a FabFilter Pro-L limiter on the track and adjust the output until I get an average reading of 0 dB on the meters. This feels like mad science, and I’m certain I’m missing something obvious to everyone but me. I’ve yet to tackle where compression and EQ come into play.
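The attenuation itself is just decibel arithmetic. A sketch of the two conversions involved (the -12 dB figure below is an arbitrary example, not a measurement from my rig):

```python
import math

def db_to_gain(db: float) -> float:
    """Convert a decibel change to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def gain_to_db(gain: float) -> float:
    """Convert a linear amplitude multiplier to decibels."""
    return 20 * math.log10(gain)

# Pulling a hot signal down by 12 dB scales its amplitude to about a quarter:
print(db_to_gain(-12.0))  # ~0.251
# Halving the amplitude is roughly a 6 dB cut:
print(gain_to_db(0.5))    # ~-6.02
```

Seen this way, "way down and super low" on a fader is just a large negative dB value applied before the converter ever sees the signal.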
There is a bit of a dilemma here, as I actually have two workflows to move through. On the one hand, I have a bunch of mono outs from three Levit8s, a Mutant Hot Glue, a Blinds, and a floater I can plug into the Planar in my skiff or into the Moog Mother-32. In this flow I take the outs into my Mackie 1202-VLZ4, and that signal is fed into my Universal Audio Apollo Twin USB, which drives my audio monitors. This setup is great for turning on the synth with no intention of recording anything and getting right to patching. It also allows me to leave the synth off and work just with Bitwig, or use the Apollo for playback of audio from videos or other PC-based media.
On the other hand, if I want to record what I’m making on the synth, I have to take those outs from above and feed them into my ES-6 and ES-8 modules from Expert Sleepers, which are then delivered via USB to my PC (Windows 10) and on to the Apollo Twin Duo. As I said in the previous entry, I require ASIO4ALL to make this work. I’d like to rely solely on the ES-8 and its helpers, the ES-6 and ES-3, and feed that signal into the Mackie, but then I’d still have the issue of where, and how, to send my PC audio. Maybe a larger mixer?