Production Paper #1

Introduction:

Deciding what I wanted to produce was quite a time-consuming task for me. I wanted to include all the elements of audio, video and audio post-production in this project. Making a plan was essential for me. My aim was to integrate everything I had learnt into the semester 4 production.
I wanted to make the most of the resources available to me, using as many microphones in as many positions as possible, with books on the subject as a guideline. I took into consideration that it is always easier to remove something I have than to wish for something I don't.

I kept a documented record of all my sessions in the 3 studios. At this point I could very well point out the differences and the different tonal characteristics my recordings had based on microphone positioning, type and polar pattern. I read books about audio recording, mixing, mastering, the music business, song writing and as many other topics as I could. It is very interesting to me how everything is related to everything else in some way or the other, and when the right measures are taken to bring everything into harmony, the result is beautiful.

"To me a microphone is like a color that a painter selects from his palette." – Eddie Kramer
With all these factors in mind, I began my experiments, most of which didn't have very nice-sounding results but gave me deep insights into the art of sound production. This document contains most of the documentation taken during my recording, mixing and mastering sessions. My reason for preparing this log book is simple: to get better at what I love to do. I hope to learn things that I can make use of in the future.

Mission:

At the end of the 3rd semester, I started to pen down what I wanted to do and how I was going to achieve it. I wanted to involve everything I could in the production; here are the things I had in mind:

1: Instrument Recording:

Making use of the various microphone-positioning methods for various instruments, stereo microphone placement and suitable microphone polar patterns to achieve the desired results was a top priority. I also wanted to record some out-of-the-ordinary studio instruments, such as handmade percussion from Indian and Indonesian tribal areas, to see how I could fuse them with the other instruments and add interest to the mix. I wanted to combine electronic music production with audio recording to try to achieve a good combination, the best of both worlds.

2: Outdoor Recording:

My interest in outdoor ambience and other location recording has been ever-growing. I envisioned my track starting off with ambience sounds that would represent the situation the listener was being put in, which I thought would help achieve the desired emotional response. This was based on my studies of sound and the psychological response of the human brain and auditory system.
The outdoor recording gear used included the built-in microphones of the Zoom H4n field recorder, the Shure KSM 141 and the Rode NTG-2.

The ambience recording was done at a cemetery in Choa Chu Kang.

3: Outboard Gear Signal Processing:

I wanted to use as much outboard signal processing as I could. I have noticed that there is a difference between digital and analogue compression.
What interests me about outboard and analogue gear is the character it adds to the signal. Hysteresis at low signal levels, distortion at high signal levels and added harmonics were present in retro recordings, and audiophiles like me tend to like that warmth in a sound rather than digital clipping. I would be using the Neve 8816 summing mixer in Studio 3, the SSL bus compressor in the AWS 924 and other analogue gear to try to get those elements into the mix and see whether it made the sound better, judged against my reference tracks.

4: Electronic Music Production:

Figure 3: My MacBook Pro with a session of Ableton Live.
As I was first introduced to audio engineering and music production accidentally, after I came across an electronic music production software, I wanted to do some electronic music production and fuse it with real recording. My main digital audio workstation for composing was Ableton Live, but I was not restricted to only one. I frequently went to software such as Propellerhead's Reason via ReWire, Cubase, Logic and Pro Tools, making use of the features I liked best in each of them. I also used Fruity Loops, a software by Image-Line, which helped me break through a writer's block I was facing when composing and add some interest to the sounds. All this while keeping in mind the bit depths and sampling rates of the audio files. What fascinates me about electronic music production is how I could make interesting-sounding music even without a deep knowledge of music theory. It also helped me improve my musical writing ability.

5: Use Of Prototype Midi Instruments

Figure 4: Image of Midi Shoes (Musical MIDI Shoes, n.d.)
I wanted to make use of different approaches to making music and controlling various parameters in my electronic music production via the Musical Instrument Digital Interface (MIDI).
Another new approach was using the Nintendo Wiimote, a gaming controller whose accelerometer data can be converted to MIDI signals when it is connected to a computer via Bluetooth. It can send notes, control X-Y axis parameters and much more, depending on how it is configured (see Figure 5).
Figure 5: Image of Wii Controller (Wiimote, n.d.)
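The idea of turning accelerometer readings into MIDI control data can be sketched very simply. The raw value range and controller number below are illustrative assumptions, not the Wiimote's documented calibration; the point is only the clamp-and-scale step and the three-byte shape of a Control Change message.

```python
# Hypothetical sketch: scaling a raw accelerometer axis reading to a
# 7-bit MIDI CC value. The 0-255 raw range is an assumption.

def accel_to_cc(raw, raw_min=0, raw_max=255):
    """Clamp a raw reading to its range, then scale it to 0-127."""
    raw = max(raw_min, min(raw, raw_max))
    return round((raw - raw_min) * 127 / (raw_max - raw_min))

def cc_message(channel, controller, value):
    """A MIDI Control Change message: status byte (0xB0 | channel),
    controller number, value -- each data byte limited to 7 bits."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])
```

Tilting the controller would then stream `cc_message(0, 1, accel_to_cc(raw))` for each new reading, which a DAW can map to any parameter.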

6: Use Of Midi Controllers:

I used a Behringer BCF 2000 MIDI controller as a control surface for my sessions. It gave me more control: I didn't have to reach for the mouse to adjust my levels, I could do it on the faders provided. Thankfully the BCF 2000 comes with many modes; my favorite is the Baby HUI, which emulates the Mackie HUI controller, saving me a lot of time mapping individual faders. I could also switch modes, assign my own custom-controlled parameters and record them. This enabled me to create some interesting automation on some synths and effects.
Figure 6: The BCF 2000 Controller.
Other Midi controllers were the Korg Nano Pad for drums, the Key Rig 49 for keys.

7: Monitors:

The following are the monitors I have mixed on:
Sennheiser HD 280 Pro, KRK Rokit 8s, Dynaudio BM 6A.
I have also monitored the track on various car stereo systems, a radio speaker (mono), iPhone headphones and laptop speakers to hear how the mix sounds in different environments.
The APC-20 by Akai was also very useful in making interesting patterns and beats.
Figure 7: Akai APC-20 Midi Controller.
“Getting ready is half the job” – Anon

What Genre?

This was a question that bothered me a lot, and it took up most of my time in the semester. As I am used to listening to all kinds of music, it was difficult for me to come to a conclusion about the genre I wanted to produce. It bothered me so much that I would start on a certain genre and then, after working on it for a few weeks, completely delete the whole session and start over fresh. The following log shows how inconsistent I was, and how much time I wasted instead of making up my mind.
Day 1:
2nd July:
Deciding which genre / type of music to make.
Day 2:
3rd July:
Listening to various genres.
Potential options are hip hop or psy trance.
Day 3:
4th July:
Confirmed to make psy trance. Research on psy trance music: history, listening to Infected Mushroom and other bands.
Day 4:
5th July:
Listening to tracks on the SSL AWS 900 console and using the mono and left-channel phase to find out the panning placement of instruments.
Day 5:
6th July:
Artist Saphira played and recorded the guitar.
Using the U87 on an acoustic classical guitar; different styles were played.
Day 6 onwards:
Research on Forums and articles:
SOS, Music Radar, Google. Researching and downloading plugins:
spectrum analysers etc.
July 12, 13:
Working on ableton live:
I started off with a psy trance song in mind.
With more research I came across an interview with Armin van Buuren.
He said that a track should be made according to the current trend: in dance music we have to think about where it is going to be played, that is, the clubs, where the low frequencies and bass matter, and the DJ plays a track before and after ours; if a track is too different from the current trend of the music, the DJ simply won't play it.
Hence, I thought I should stick to the new trend. Although I wanted to do something different, I decided to do something that had both real-world samples and electronic elements.
Hence, I tried to tell a story with the song I was going to make; I wanted to take the listeners on a sonic journey to another dimension.
I wanted to start off the track with a waterfall and some Native American chants. With this I wanted to take the listener back in time, to when there was beautiful, untouched nature.
After all this, I have a track now which has the drum beat of a hip hop song, the synth of a house track, some drum rolls like in rock music, distorted guitars, tabla and more.
So, what exactly do I categorize my song as? That's when I decided on having multiple reference tracks to mix the different parts of the song.

Recording Sessions:

Drum Recording #1:

15/07/2012:
On this day, the following were the microphones used on the drum kit:

Kick:

Figure 8: The AKG D112 (left) and Shure Beta91 placed at 1” from the skin.
AKG D 112 :
Figure 9: Image of frequency response of AKG D 112 (D-112, n.d.)
Figure 9 shows the frequency response of the AKG D 112 microphone. A great deal of what I researched owes a debt to Bobby Owsinski and his books "The Recording Engineer's Handbook" and "The Mixing Engineer's Handbook". The Recording Engineer's Handbook has a dedicated chapter on drum kit microphone technique. Bobby Owsinski talks about visualizing the drum kit as a single instrument instead of considering each element of the kit separately.
“I feel that the drums are sort of like an orchestra in the sense that there’s a lot of instruments, so I don’t make any attempt to isolate drums from one another or to do anything that would take away from the overall sound. For instance, if you hit the snare, the whole drum kit rings and vibrates. In my opinion, that’s a part of the sound of the set that you want to keep. So I don’t make any attempt to narrowly focus mics or baffle things off or anything like that. —Wyn Davis” (Owsinski, 2004, p. 111).
Wyn Davis’s statement is quite interesting, but I believe the miking technique should follow whatever result is desired of the microphone. At the start of the session I placed the D112 about 1 inch from the skin of the kick. The reason for this was to capture the sound of the beater, that click sound. Looking at the frequency response of the D 112, I could see boosts at around 100Hz, 4kHz and 12kHz. This would give me a nice low end (depth) as well as the slap of the kick. These terms come from the Subjective Audio Qualities graph in the Sound On Sound magazine excerpt provided by SAE. For the kind of kick I was aiming for, I wanted the low end as well as the slap sound, hence I placed the microphone up close to the skin/beater. Shredded newspaper or pillows can be used to dampen the sound of the kick. One of the tips in The Recording Engineer's Handbook says to place the AKG D 112 about 3 inches below the beater; that provides a substantial amount of attack without sounding too "clicky", a quality most pronounced in the 2kHz region. Steve Albini says in the same book that he feels the AKG D112 has a hollowed-out sound without much mid range, which to me is good, as I would not have to deal with muddiness in the mix later on. Many engineers in the book talk about placing the microphone inside the drum off-axis, facing the center of the kick drum. I tried this as well; it had a boomy character without sounding overpowering. The amounts of "clickiness" and "boominess" seemed equivalent.
Figure 10: Frequency response of the Shure Beta 91. (Shure Beta 91A Instrument Microphone, n.d.)
Figure 10 shows the frequency response of the Shure Beta 91 microphone. Shure Beta microphones are designed for application-specific uses, unlike other microphones that can be used on a wide range of sound sources. The Beta 91 was designed for use on the kick drum and the piano. What I like is the flat boundary design of the microphone and the unique half-cardioid polar pattern.
Figure 11: Diagram of the pick up pattern of a half-super cardioid microphone. (The Directional Boundary Microphone, 2009)
What I understood is that a half-cardioid condenser microphone like the Shure Beta 91 captures the sound source mainly from one direction, in this case the front of the microphone. In my application, though, the Beta 91 was not placed horizontally on a boundary surface in the usual way.
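Directional patterns like these are commonly modelled as first-order functions of the angle off-axis; a boundary ("half") version applies the same shape over the half-space above the mounting surface. A minimal sketch of that model, with the standard cardioid and supercardioid coefficients (the exact coefficients of the Beta 91 are not assumed here):

```python
import math

# First-order polar pattern: sensitivity(theta) = a + b*cos(theta),
# with a + b = 1 so that the on-axis (0 degree) sensitivity is 1.
def sensitivity(theta_deg, a, b):
    return a + b * math.cos(math.radians(theta_deg))

# Textbook coefficient pairs (illustrative, not measured Beta 91 data):
cardioid = lambda t: sensitivity(t, 0.5, 0.5)         # null at 180 deg
supercardioid = lambda t: sensitivity(t, 0.37, 0.63)  # small rear lobe
```

The cardioid falls to half sensitivity at 90 degrees and to zero at the rear, which matches the intuition that the microphone "captures sound mainly from one direction".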
Figure 12: The Shure Beta 91 Placed on the wooden container of the Neumann U 87.
As you can see in Figure 12, the Beta 91 is placed on a wooden container, the case of the Neumann U87 microphone. When I monitored the output of the Beta 91, I could hear the rumble caused by vibrations transferred from the wood of the drum to the case, which was not something I wanted to record. I then tried suspending the microphone vertically (hanging) at about the same height as the beater; that's when I noticed I was getting a much better-sounding low end with just the right amount of slap in the kick. As the frequency response in Figure 10 shows, the response of the microphone is rather flat unless the contour switch is enabled, which causes a rather wide cut in the 400Hz region; this would get rid of muddiness in the sound. There is a peak in the response at 7kHz and a slight rise at about 15kHz, which would give nice low-end detail along with the upper harmonics and the sound of the beater. I noticed that suspending the microphone vertically meant it would need some sort of suspension system to keep it from moving around due to the sound pressure in the drum. Hence I made use of pillows, which also dampened the sound of the kick drum while acting as a suspension for the microphone(s), including the D112.
Figure 13: Beta 91 frequency response after run through the Waves plugin called PAZ Frequency.
As seen in Figure 13, I inserted the Waves PAZ Frequency plug-in (VST), a frequency analyzer. The measured response of the Beta 91 shows heavy low-frequency content up to 125Hz and a slight dip around the 250-500Hz area. This is good, as that's where the "muddiness" usually is. There is then a visible boost at 1.5kHz, a dip at around 4kHz and good response from 8kHz to 15kHz, which would give me good high-frequency information from the kick.
Figure 14: AKG D 112 frequency response through the Waves PAZ Frequency.
Similarly, comparing the frequency information of the AKG D 112 and the Shure Beta 91, I could easily see that the Beta 91 had a smoother curve and was more prominent in the high-frequency area.
The D112 does have energy in the 250-500Hz area. If undesired, I could cut those frequencies away with an equalizer.
I also think it would be helpful to make frequency pockets. I would cut the highs from about 6kHz on the D112, as there is not much energy in that region, whereas the Beta 91 has much more there. Doing this, I would get a kick with plenty of lows as well as highs, without any muddiness. Along with triggering these kicks, I would also include a gated signal generator sending somewhere around 63Hz to add low end to the kick sound if needed.
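The frequency-pocketing idea amounts to complementary filtering: low-pass the D112 channel and high-pass the Beta 91 channel around the same corner so each microphone contributes its strongest band. A minimal sketch with first-order (6 dB/octave) responses; the 6 kHz corner is the value mentioned above, while the first-order slope is an illustrative simplification:

```python
import math

# First-order filter magnitude responses (fc = corner frequency in Hz).
def lowpass_mag(f, fc):
    """Magnitude of a one-pole low-pass at frequency f."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

def highpass_mag(f, fc):
    """Magnitude of a one-pole high-pass at frequency f."""
    return (f / fc) / math.sqrt(1.0 + (f / fc) ** 2)

fc = 6000.0  # D112 low-passed and Beta 91 high-passed at the same corner
# At fc each filter sits at -3 dB; their powers sum to unity at every
# frequency, so the combined kick keeps a flat overall contribution.
```

With steeper (higher-order) filters the "pockets" get sharper, at the cost of more phase shift around the crossover point.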
Figure 15: AKG 112 and Shure Beta 91 in kick drum.

Snare:

Shure Beta 57A (Top)

Figure 16: Image of Shure Beta 57A. (Beta 51A, n.d.).
Refer to Figure 16. I used the Shure Beta 57A to record the top of the snare. Care was taken to prevent the microphone from moving during recording. As the microphone stands were worn out, they would slowly and gradually drift from their set angle and position due to gravity and the weight of the microphone. To deal with this, avoid changes in the amplitude of the recorded sound and stabilize the microphone stand, I used a very simple method: I placed a counterweight on the other end of the microphone stand.
Figure 17: Beta 57A frequency response. (Beta 57A_large, n.d.).
This prevented the microphone from moving, giving me consistent recording levels.
Figure 18: The counter weight.
In this case I used the suitcase of the sE Z5600 tube microphone as a counterweight, and it did the job. This was a simple yet effective measure. (The microphone was, of course, not in the suitcase at the time.)
Figure 19: Shure Beta 57A on snare top. Frequency response.
Above is a screenshot of the frequency response of the Shure Beta 57A placed on top of the snare. According to the "Subjective Audio Qualities" chart provided by SAE, published by Sound On Sound magazine, the following are the qualities and frequencies of a snare: fatness: 240Hz; bite: 2kHz; crispness: 4kHz-8kHz.
As we can see in the frequency response, there is fatness in the snare sound, as there is a peak around 250Hz. There is also content at 63Hz and below; I would engage a high-pass filter at around 80Hz to get rid of the rumble, which is probably leakage from the kick and the rest of the drum kit. If the mix gets a bit muddy, I would dip about 4-6 dB at around 500Hz to avoid muddying it up.
Figure 20: Frequency response of Shure SM 57 (snare bottom)
Figure 21: Shure SM 57. (Shure SM 57, n.d.).
Figure 20 is the frequency response of the Shure SM 57 positioned at the bottom of the snare drum. As the snare springs are located on the bottom, there is more presence in the highs, seen as a higher peak around 7kHz in the chart. It is also crucial to invert the phase of one of the two signals; I had already inverted it on the recording console itself. I would boost a little around 2kHz on the snare bottom to get more of the bite in the snare. The snare top could also be boosted if needed, somewhere around 3-5 dB, to bring up the bite in the snare drum.
Figure 22: Shure SM 57 frequency response. (Shure SM 57_large, n.d.).

It can be seen how the frequency response provided by Shure and the response we measured with the frequency analyzer are similar in terms of the peaks and dips. Bobby Owsinski says in his book that the "crack" of the snare doesn't always come off the snare (top) microphone. He advises using a properly positioned room microphone to get that sound. I think this probably has something to do with the reflections of sound, the room ambience and the characteristics of the snare drum and the whole kit together. Steve Albini suggests that the drum kit should not be tampered with: it should be left as the drummer set it, as it is the drummer's instrument. He then says that if we hear something that irritates us, we should make the drummer listen to it and ask whether it is something he wants in the drum sound. If he decides it is not what he wants the kit to sound like, that's when measures to change the tuning and other aspects of the kit should be considered.
“I like to think that the sound of a drummer’s kit is an extension of his playing style, and changing things on him is as weird as asking a guitar player to play ukulele—it should only be done for cause.—Steve Albini” (Owsinski, 2004)
In my case, since my recordings were mostly to be used as triggered samples in the final product, I asked a few drummers whether the drums sounded right to them after I played them a few of the reference songs. After approval from about 7 people, drummers and non-drummers alike, I decided no changes were needed in terms of drum tuning.
There are some microphone isolation techniques given in the book, one of which involves covering the microphone with the top half of a cut bottle. This would, however, change the characteristics of the microphone's sound, which I didn't want, hence I didn't apply these techniques.

Hi Tom:

Sennheiser E604 (Top)
Shure SM 57 (Bottom)
Even though I recorded the toms, I didn't use them in the final product. Care was taken with the placement of the microphones to avoid any bleed from the surrounding drums. The low, mid and hi toms were all recorded.
Single hits of the toms were recorded.

Overheads (A/B):

Rode NT3
Figure 24: Image of Rode NT3. (NT3, n.d.).
The overheads were recorded with an A/B spaced stereo pair of Rode NT3s.
Figure 25: Image of frequency response of Rode NT3. (NT3_freq, n.d.).
Above is the frequency response of the Rode NT3 microphone. The response is almost flat, with some boost in the 8kHz area and a slight rise at 150Hz.
As the microphone has a near-flat frequency response, it would do great as overheads, recording the hi-hats, crashes and cymbals.
I would engage a high-pass filter on the stereo overhead track, which includes the hi-hats, and use it as the hi-hat track.
A duplicate of the stereo track would also be used as a room track, with individually cut-up duplicates for the kick, snare, toms, etc., using frequency pocketing for the individual instruments.
Use of all the recorded tracks / samples:
It should be noted that even though I recorded drums, guitars, vocals and Foley multiple times, most of these recordings did not find their way into the song. This is simply because I didn't think they sat right in the mix; they sounded off and weren't even close to the reference.
I took care that the air-conditioning system was turned off to lower the noise floor of the recording room.
Standing waves and flutter echo were noticed, hence I placed two Styrofoam (thermocol) dispersion chunks on the two window panes in the Studio 2 recording room.

Guitar Recording:

16/07/2012:
Guitar recording was done in Studio 2. The guitar was tracked through the Line 6 POD's line input.
Two separate outputs were recorded: one being the processed stereo output, the other going from the studio out to the line in of the AWS 924 console. I started off by listening to some default presets while the guitar was played by John Gratien in the control room.
After tweaking some parameters of the preset, I recorded some riffs and chords. These were recorded based on the songs that had been sent to John beforehand. These were in the psy-trance genre, hence later in the production process the guitar sounded different from what the song needed relative to the reference track.
Figure 26: Line 6 Pod X3 Guitar and vocal processing unit.
The guitars were recorded with the Line 6 POD X3. As seen above, the built-in tuner was used to get the guitar in tune prior to recording.
Figure 27: Presets being changed in the Line 6 Pod
I tried to record the effected guitars dry, that is, with any delay/reverb effects removed. I could add these later in the mix, which would give me more control over the sound. Settings like the guitar amp type and microphone type were experimented with, along with off-axis and on-axis placement.

Drum Recording #2:

On the same day, 16/07/2012, the drums were recorded as well.
Hi-Hats:
The Hi Hats were recorded with the Shure SM 57s.
Figure 29: The U87 as Room Mic.

Room Microphone:

The Neumann U 87 was used as the room microphone, raised to the height of my ears from the ground to capture the same impression as standing in front of the drum kit.

Drum #3/Foley Recording :

On 03/08/2012 another recording session was done, primarily focused on Foley recording: recording different sounds, along with different microphone placement techniques on the drums and on the instruments/subjects for the Foley recording.
The microphones used in this session were :
1: sE Electronics Gemini:
Figure 30: Image of frequency response of sE Gemini. (Gemini – Discontinued – Replaced by Gemini II / III, n.d.)
The frequency response of this microphone is very impressive. It has a 12AX7 valve on the input stage coupled with a 12AU7 valve on the output stage, which gives it the character of a vintage tube microphone, i.e. warmth. It is rather incorrect to say that it alone gives "analogue warmth", as what gives a sound an analogue feel is a combination of various factors: the instruments, musicians, rooms, microphones, preamps, processors and effects. In this case, though, I am referring to the character that is added to a signal as it passes through a particular recording medium. I decided to use this microphone to get that characteristic sound.

2: Neumann U 87:

Along with the Gemini, I used the Neumann U 87
Figure 31: Image of frequency response of U 87. (Switchable Studio Microphone U 87, n.d.)
Comparing the frequency response of the U 87 to the Gemini's shows that the U 87's lows roll off more smoothly. The two are otherwise fairly similar, except that the U 87 is not a tube microphone and hence does not add the character to the sound that a tube valve would. I would later mix the two sounds in serial or parallel, based on what was required compared to the reference, to get the sound I wanted. More microphones mean more options, and having more is always better than having less: I can always take frequencies away, but adding ghost frequencies to a sound makes it lose its touch of reality.

Ambience / Outdoor Recording:

10/08/2012
I had earlier planned to record ambience sounds on my own using the Zoom H4n handheld recorder, but I came across the film students, who had planned to record some video footage at a cemetery in the Choa Chu Kang vicinity. I decided to join them along with Thierry, Ikhsan and Louis of the AEDF 911 batch. We split up into two groups. One group had the H4n; connected to its XLR inputs were 1: a Shure KSM 141 and 2: a Rode NT3.
The KSM 141 was set to its omni pattern, chosen for its ability to capture sound from all the surroundings; hence it would be a good choice for recording ambience. So whenever we came across an area with a good level of ambience, we would use the KSM 141.
The Rode NT3, on the other hand, was intended for recording subjects far away from us.
This was effective when we were walking and came across water falling from a sewage outlet; aiming the microphone towards the source of the sound gave a better level.
The other group was using my MacBook Pro 15” connected to an M-Audio MBox; connected to this was the KSM 141.
At one point I panicked when I found that the rain had formed a puddle on the touchpad of my MacBook; it was dark and the other two were recording. As they noticed the puddle of water on the touchpad, they started to wipe it off. That's when my MacBook started to behave oddly, falsely registering cursor movements and clicks.
Fortunately, they were able to return my MacBook to me in working condition.
The recording on my MacBook was done in Adobe Audition, with input from the MBox preamps.

Acoustic Guitar and Vocal Recording:

Artist: Saphira
Guitarist: Saphira
Microphones: U 87
Shure Beta 57a
Shure SM 57
For the acoustic guitar, I placed the U 87 off-axis at the sound hole of the guitar and the Shure Beta 57A at around the 12th fret.
That would give me a full-sounding guitar: the lows at the hole and the highs at the frets. For the vocal, the U87 was placed very close to the singer with a pop filter. I told the singer to sing into the microphone as if she were singing into a baby's ear. What I got were very soft vocals, which were what I desired.
For the louder parts, I told the singer to stand two fists away from the microphone. The microphone was placed above the lips, positioned so that her breath was not interfering with the recording.
The microphone was also positioned upside down, higher than the singer's height. This made the singer sing with her mouth angled upwards, which gives a fuller sound: more body in the singer's frequencies.

Mixing:

Bobby Owsinski’s book "The Mixing Engineer's Handbook" says that great engineers think about a mix in three dimensions: "tall", "deep" and "wide". These represent the presence of frequencies, the depth of the sound and the width of the sound (stereo width).
Digital Audio Workstation:
1: Ableton Live:
Figure 32: Image of Ableton Live. (Ableton_live, n.d.).
The recorded files were brought into Ableton Live. The session was set to 16-bit with a sampling rate of 44,100 Hz. I made sure all the recordings and samples were at the same resolution; if not, they were converted. If they were at a higher resolution, dither was applied to avoid conversion errors.
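What dithering does when reducing resolution can be shown in a few lines: a small amount of noise is added before rounding, so quantization error becomes benign noise instead of correlated distortion. A minimal sketch, assuming floating-point samples in the -1.0 to 1.0 range and TPDF (triangular) dither of about one least significant bit; this is an illustration of the principle, not Live's actual converter:

```python
import random

def dither_to_16bit(sample, rng=random.random):
    """Quantize one float sample (-1.0..1.0) to a 16-bit integer,
    adding triangular-PDF dither of +/- 1 LSB before rounding."""
    lsb = 1.0 / 32768.0
    tpdf = (rng() - rng()) * lsb           # difference of two uniforms
    q = round((sample + tpdf) * 32767)
    return max(-32768, min(32767, q))      # clamp to the 16-bit range
```

Running every sample of a 24-bit or 32-bit float file through this before writing a 16-bit file is, in essence, what a DAW's dithered export does.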
Figure 33: Drum Rack in Ableton.
The drum group consists of 8 tracks in the final song. The drums were triggered by Live's Drum Rack instrument. The kick sample which suited best was brought into the rack.
Figure 34: Compressor on Shure Beta 91.
There is a compressor inserted on the Beta 91 track, with the attack at about 4 and the release at 6. The make-up gain is set to match the amount of gain reduction happening during compression of the audio.
To see what exactly the compressor did:
Figure 35: Waves C1 compressor on the Kick.
Here are the pre-compression and post-compression waveforms of the kick.
Figure 36: Kick before Compression.
Figure 37: Kick after Compression
These were recorded via resampling in Live.
As we can see, the overall amplitude of the waveform is increased; the attack, release and decay portions have significantly increased in level.
Ratio: 4:1, attack 3, release 6. These settings were used to get substantial compression on the kick. The release time was set to 6 so that compression had released by the time the next kick appeared, which would otherwise change the dynamics of the kick's attack.
The kick sounded louder and fuller after compression.
As the drum rack has 3 kicks triggered together (Beta91_kick, D112_kick and Kick Exile 4, from a sample pack), I had to run each of them through a frequency analyzer to see if they were fighting in terms of frequencies.
Figure 38: Frequency response of beta 91 in drum rack.
Figure 39: Frequency response of D112 in drum rack.
Figure 40: Frequency response of exile kick sample in live.
Comparing and analyzing the three kicks and their frequency responses, I could see that there was a build-up of low frequencies, which can be seen in the summed frequency response of the three.
Figure 41: Combined frequency response of the three kicks.
Figure 42: Frequency response of all kicks in RMS.
There is a visible build-up in the 250Hz region as well; this would muddy up the mix later on, when additional instruments are added.
Hence an equalizer with a cut at around 250Hz would be inserted on the kick bus.
Figure 43: Kick bus with EQ bell cut at around 250 Hz.
To find the frequency regions that would later cause muddiness, I narrowed the Q setting on the equalizer and swept through the frequency range. Listening as I swept, when I heard an area that sounded as if it was muddying up the kick, I cut it to a level where it was no longer noticeable.
I did not go to extremes with the settings, as that would alter the neighboring frequencies as well.
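The narrow-Q bell cut described above can be sketched with the widely used RBJ "cookbook" peaking-EQ biquad. The 250 Hz center, 4 dB cut and Q of 8 below are illustrative values based on the kick-bus EQ mentioned earlier, not the exact plugin settings:

```python
import cmath
import math

def peaking_biquad(f0, gain_db, q, fs=44100.0):
    """RBJ cookbook peaking-EQ biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def magnitude_db(b, a, f, fs=44100.0):
    """Evaluate the filter's gain in dB at frequency f."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return 20 * math.log10(abs(num / den))

# Narrow bell cut: 4 dB down at 250 Hz, Q = 8.
b, a = peaking_biquad(250.0, -4.0, 8.0)
```

Sweeping `f0` while listening, as described above, is exactly moving this bell along the spectrum; the narrow Q keeps the cut from dragging neighboring frequencies down with it.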
The compression was done based on the guideline given by Bobby Owsinski: start with the slowest attack and the fastest release on the compressor, increase the attack until the kick begins to sound dull, then adjust the release until the volume is back to 90-100%. He says the compressor should breathe in time with the song, and I tried the same with most of my compressor settings. I did, however, go for drastic compression on send channels when I wanted to use compression as an effect, i.e. parallel compression.
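The attack/release behaviour that Owsinski's guideline tunes by ear can be illustrated with a toy feed-forward compressor: a level envelope with separate attack and release smoothing, and gain reduction above a threshold at a 4:1 ratio. The threshold and millisecond times below are illustrative assumptions, not Live's or the SSL's actual parameters:

```python
import math

def compress(samples, fs=44100, threshold_db=-20.0, ratio=4.0,
             attack_ms=3.0, release_ms=60.0):
    """Toy peak compressor: fast attack, slow release, hard knee."""
    atk = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        coeff = atk if level > env else rel    # attack when rising
        env = coeff * env + (1 - coeff) * level
        env_db = 20 * math.log10(max(env, 1e-9))
        over = env_db - threshold_db
        gain_db = -over * (1 - 1 / ratio) if over > 0 else 0.0
        out.append(x * 10 ** (gain_db / 20))
    return out
```

A release that is too long here behaves exactly like the problem described above: the gain has not recovered when the next kick arrives, so its attack is pushed down.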
Snare:
The snare in the song is a combination of the recorded snare (Shure Beta 57A on top, SM57 on the bottom) and four other sampled sounds.
Figure 44: The drum rack comprising the snare samples.
As we can see above, the snare has multiple samples, all at different volume levels, plus a clap panned slightly left.
Figure 45: Reverb on clap-02
I made sure there were no phase issues between the snare samples, as those would cause the snare to lose its fullness and could cancel some frequencies out. The layered snare drum gives a better sound than a single sample.
Figure 46: frequency response of the snares together.
The frequency response shows a peak at around 150-250Hz along with visible low-frequency content, so I applied a high-pass equalizer from 125Hz.
Figure 47: High pass equalization on the snares.
As seen above, after adding the high-pass filter the lows are faded out. Further into the mix this helps avoid muddiness in the low frequencies, especially once the bassline and the sub come in.
An effect is added to the beat: it feels like a wind circling the head from left to right and back, placed on the first kick of every bar and sometimes reduced in volume.
Figure 48: Frequency response of the special effect Kick (raw).
Figure 49: Overdrive added to the kick.
A Live audio effect, Overdrive, which is a distortion plug-in, is added to the kick.
The range of frequencies to be distorted can be selected in the X-Y region, and it can be narrowed for specific frequency distortion. Drive determines the amount of distortion on the signal, and Tone acts as a post-distortion equalization control.
The Dynamics control decides how much of the original signal’s dynamics come through: the higher it is set, the more of the original signal passes and the less compression is applied to the signal.
Figure 50: The Kick after distortion by overdrive.
As we can see, there is more frequency content in the 1kHz to 2kHz region after the distortion.
Figure 51: Resonators are added.
The resonator adds extra frequencies on top of those already present in the kick. As the result sounds rather musical, the dry/wet is set very low, keeping the effect subtle.
Figure 52: Final Eq, lows rolled off
A ping-pong delay is added to give the kick an interesting effect, and then the lows are rolled off to avoid cluttering the low frequencies.
This effect is considered part of the main drum beat, although it is mixed in very subtly.
Figure 53: Hi hat frequency response.
Above is the frequency response of the hi-hats.
These are the same hi-hats recorded in drum recording session #1. Here again low frequencies are present, along with some content around 250Hz.
A high-pass filter engaged at around 100Hz solves the problem, and a small bell dip at 250Hz reduces the frequency clutter.
Along with these percussions, I also had a shaker playing in and out of the arrangement. As I didn’t think it needed compression or equalization, I simply brought it in at a rather low level to add more highs to the mix.
The song starts with a bassline/synth whose filter cutoff gradually opens, bringing in the higher frequencies.
Figure 54: Bassline at the start of the bar.
Figure 55: Bass line at the end of the bar.
The gradual frequency spikes in the higher region of the frequency response chart can be seen. This adds interest to the song and keeps the rather long progression engaging.
Automation is added in Ableton.
Figure 56: Automation of filter cutoff.
At the end of the bar the cutoff returns to its default value, bringing the synth back to its low frequencies.
Along with this, the levels of the reverb sends (auxiliaries) are automated so that the amount of reverb rises and falls over time with the arrangement; the reverb is increased wherever there is a change or a drop.
Figure 57: Duplicate of bass line via Waves GTR.
The bassline is duplicated and sent through the Waves GTR plugin, which adds distortion to the bassline plus a Lay D, a stereo reversal-and-feedback effect.
The audio was recorded on a new track.
A gate was inserted on the signal, with its sidechain input keyed to the output of the kick in the drum group, so that every time the kick hit, the distorted bassline would pass through the gate.
Figure 58: Gate and equalization on the distorted bass line.
A high-pass equalizer is inserted to keep the low frequencies and the 250Hz region out, so that the mix does not get muddy from the frequency clutter that would occur once everything is summed together.
The threshold of the gate is set so that the gate opens only when the kick plays; when there is no kick, the signal cannot pass and the distorted bassline is not heard.
This put more emphasis on the kick, making it sound as if those were the kick’s own upper harmonics. They aren’t, but the pseudo impression pushes the kick into the presence region, since the frequencies left after the equalizer sit in the higher range.
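The kick-keyed gate can be reduced to a few lines. This is only a conceptual sketch, not Live’s Gate device: the sidechain key, not the signal itself, decides when audio passes.

```python
def sidechain_gate(signal, key, threshold=0.5):
    """Let `signal` through only while the key (the kick) exceeds the threshold."""
    return [s if abs(k) > threshold else 0.0 for s, k in zip(signal, key)]
```

In the real device the threshold is set against the kick’s level, and attack/hold/release times smooth the opening and closing instead of the hard on/off shown here.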
Figure 59: Hi- Bass line through GTR by Waves.
The signal is now sent to another channel with the GTR plugin by Waves inserted. A distortion is added whose controls include the distortion level, lows, highs, a frequency select and gain. I decreased the lows as the bassline already covers those, and set the amount of distortion so that it sounds crunchy. The Bass Pitcher adds more depth by pitch-shifting the signal and blending the result back in, and the chorus effect multiplies instances of the signal with slight phase shifts and timing differences, giving a deep-sounding guitar tone.
Figure 60: Hi- Bass line through waves gtr frequency response.
Above is the frequency response of the bassline signal through the GTR plugin. As it again contains many low frequencies, this time I engaged a low shelf: I don’t want to remove the lows entirely, but rather keep them at a low amplitude so they are noticeable without overpowering the overall mix.
A bit of the 250Hz region was also taken down.
Figure 61: Hi- Bass line #2 through GTR.
Another copy is sent to GTR, now with different effects: a Bass Pitcher, an octaver and a distortion.
I noticed that the order of the effects makes a difference to the character of the sound. This one sounds like a guitar with rapidly plucked strings, due to the octaver; the distortion gives it the crunch, and the Bass Pitcher the depth and roundness.
Figure 62: Pads. (Cakewalk Z3ta +2)
The Cakewalk Z3ta+2 VST instrument is used for the pad and the bassline. I modified a preexisting pad from the preset library: the filter cutoff was changed to let the mids and lows pass through, and the attack, decay and sustain were made slow so that the pad sounds smooth and soft.
Another percussion instrument is a milli drum sample.
It is mixed in at a low level to add more highs to the mix. Crystallizer by SoundToys is a VST audio effect plug-in that does granular reverse-echo slicing and retro pitch processing.
I set the echo to forward and the delay to 0.1s, and added a little pitch; most of these are near-minimum settings. The effect is mixed in (dry/wet) at a very low level, barely audible, but it adds character to the sound of the milli drums.
An equalizer with a bell cut at 250Hz and a boost at around 10kHz is added to spice up the sample.
The sample is actually a 4-bar loop fused from 3 bars of one sample and 1 bar of another sample of the same instrument; this gives the sound more musicality.
Figure 63: Sound toys by Crystallizer.
Splice: the length of the section of audio captured in the plug-in and played back.

The next percussion is from an Asian percussion sample pack; it sounds a bit like an Indian tabla. This instrument comes in and out of the arrangement.
The ping-pong delay is added to the sound, with on/off automation inserted at certain points.
Figure 64: Drums group.
As all these tracks were taking up a lot of space in the work area, things were getting complicated and confusing, and it was getting in the way of my creativity when arranging certain sounds, much like a writer’s block. To make things easier I needed fewer things on my plate; as the rule says, less is more. Grouping the tracks together saved a lot of space and made things look clean.
Color-coding: coloring the different tracks was essential. File management is something that is often not paid attention to, but it’s the little things that make a difference to the listener’s perceived end product. Different tracks were given different colors so spotting the desired track is easy. Grouping also makes it easy to apply parallel compression, to add more punch to the drums and let them stand out of the mix.
White Noise:
Another element of the song used mostly in the transitions from bars to bars is the white noise.
To create the white noise, an instance of the Operator instrument is created in Live.
Figure 65: Operator.
In this case I didn’t want the Operator’s output to go to the master mix; I wanted it to feed only the compressor’s sidechain input.
To do this, I opened the track’s input and output settings and set the output to Sends Only.
Another track was created and on it was another Operator.
But instead of a sine wave, this Operator was set to white noise, so that every time a key is hit, it outputs white noise.
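White noise is simple to picture: every sample is an independent random value, which is what gives it a flat spectrum on average. A minimal sketch (illustrative only; not how Operator generates it internally):

```python
import random

def white_noise(n, amplitude=1.0, seed=0):
    """n samples of uniform white noise in [-amplitude, amplitude)."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [amplitude * (2.0 * rng.random() - 1.0) for _ in range(n)]
```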
Figure 66: White noise in Operator.
Figure 67: Auto filter, reverb.
After that, audio effects are added after the Operator:
1: Auto Filter.
2: Reverb.
3: Compressor.
The Auto Filter’s parameters determine which frequencies pass through the filter, with the option of an LFO.
The low-pass filter is automated at certain parts of the song, and the reverb parameters are automated along with it, so that as the filter moves, the reverb increases or decreases to create interest.
Figure 68: Side chain compressor for white noise.
As we can see, the compressor’s sidechain input is set to receive audio from the side-kick track, which holds the Operator playing the sine wave. The attack and release were set to get a smooth-sounding compression.
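Conceptually, this sidechain setup ducks the noise: whenever the inaudible sine kick exceeds the threshold, the noise level is pulled down, producing the pumping effect. A hypothetical, unsmoothed sketch (a real compressor applies the attack and release smoothing mentioned above):

```python
def duck(signal, key, threshold=0.5, depth=0.8):
    """Attenuate `signal` by `depth` whenever the sidechain `key`
    is above `threshold`; pass it unchanged otherwise."""
    return [s * (1.0 - depth) if abs(k) > threshold else s
            for s, k in zip(signal, key)]
```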
All these parameters are automated together to create a spacey feeling and give the track ambience. It also makes the track sound lush and modern, as this technique is used by many modern-day producers and can be noticed in one of my reference tracks.
Figure 69: Automation of white noise.
The automation of the various parameters can be seen above. The amount of white noise is not substantial; only a pinch of it is added throughout the track. The output level of the Operator’s white noise is also automated to give the sidechained noise a more dynamic character.
I used the Behringer BCF2000 to control the volume parameters in Ableton Live; this gave better control over the sound than doing it with the mouse.

Vocal:

The effects used on the vocals are FabFilter Volcano 2, Live’s Beat Repeat, and the Ping Pong Delay.
Figure 70: FabFilter Volcano.
FabFilter Volcano 2 is a filter plug-in with modulation. It has four multi-mode stereo filters which can be routed per channel or mid-side, and which can be modulated by LFOs and envelopes.
Each filter can be switched between low-pass, high-pass and band-pass, with slopes of 12, 24 and 48 dB/octave.
The routing can be stereo, left/right or mid/side. In stereo mode, Volcano’s default, both the left and the right channel pass through all the filters; the pan pots can be used to set different cutoff frequencies for different channels.
The left/right setting sends each channel through different filters: with two filters, the left channel goes through one and the right channel through the other.
Mid/side filtering splits the signal into a mid and a side signal, formed from the sum and the difference of the left and right channels; each of these signals is then sent through its own filter chain.
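The sum-and-difference encoding can be written out directly. A small sketch, assuming the common (L±R)/2 scaling (scaling conventions vary between implementations):

```python
def encode_ms(left, right):
    """Mid = sum/2, Side = difference/2 of the two channels."""
    mid = [(l + r) / 2.0 for l, r in zip(left, right)]
    side = [(l - r) / 2.0 for l, r in zip(left, right)]
    return mid, side

def decode_ms(mid, side):
    """Exact inverse: L = M + S, R = M - S."""
    return ([m + s for m, s in zip(mid, side)],
            [m - s for m, s in zip(mid, side)])
```

Because decoding is the exact inverse, filtering only the side signal and then decoding still returns a valid stereo pair.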
The XLFO is much like a classic LFO, but it can also apply an individual glide parameter to every step. The frequency knob sets the time each cycle of the waveform takes to complete. The frequency is itself a modulation target, so it is possible to have one XLFO modulate the frequency of another. XLFOs can also be synchronized to the tempo of the project, which in this case is set in our DAW, Ableton Live.
The free-running mode allows values from 0.02 to 500Hz, i.e. cycle lengths from 50 seconds down to 0.002 seconds.
The envelope generator produces an ADSR (attack, decay, sustain, release) envelope when the incoming signal exceeds the level set by the threshold knob in the plug-in.
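For reference, a piecewise-linear ADSR can be sketched as below. The segment times are arbitrary examples, and real envelope generators often use exponential rather than linear segments:

```python
def adsr_envelope(sr, attack, decay, sustain, hold, release):
    """Piecewise-linear ADSR: rise 0 -> 1, fall 1 -> sustain, hold,
    then fall to 0. Times are in seconds at sample rate `sr`."""
    na, nd = int(sr * attack), int(sr * decay)
    nh, nr = int(sr * hold), int(sr * release)
    a = [i / na for i in range(na)]                          # attack ramp
    d = [1.0 - (1.0 - sustain) * i / nd for i in range(nd)]  # decay to sustain
    h = [sustain] * nh                                       # sustain while held
    r = [sustain * (1.0 - i / nr) for i in range(nr)]        # release to zero
    return a + d + h + r
```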
Figure 71: Beat Repeat by Ableton.
Beat Repeat is mostly used for live performances; it samples the audio going through it and, as the name says, repeats it based on the set parameters.
Grid sets the size of the slice of audio to be sampled, Interval is the time between two captures, and the filter shapes the wet signal going out.
Figure 72: Pingpong delay on vocals.
The ping-pong delay applies delay to a signal and sends the echoes to alternating channels, left and right, at different times. The wet signal can be filtered; here the filter sits at 5.77kHz, letting that region pass to the output.
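The alternating-channel behaviour can be shown with a toy delay line. This is a hypothetical sketch without the filtering or true feedback routing of Live’s device: successive echoes land in alternating channels at decreasing levels.

```python
def ping_pong_delay(mono, delay, feedback=0.5, taps=4):
    """Dry signal in both channels; echo 1 lands left, echo 2 right, etc.,
    each `feedback` times quieter than the previous (`delay` in samples)."""
    n = len(mono) + delay * taps
    left, right = [0.0] * n, [0.0] * n
    for i, x in enumerate(mono):
        left[i] += x
        right[i] += x
        g = 1.0
        for t in range(1, taps + 1):
            g *= feedback
            ch = left if t % 2 == 1 else right   # alternate channels
            ch[i + t * delay] += g * x
    return left, right
```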
Voice #2:
The same vocal track is then duplicated, and the duplicated copy is processed with AVOX DUO by Antares and Live’s EQ Eight equalizer.
DUO Evo is a vocal-modeling auto-doubler.
Figure 73: Antares Duo Evo.
DUO Evo doubles the signal and adds delays to the left and right signals, along with variations in pitch, vocal timbre and vibrato.
Figure 74: EQ 8.
A high-pass filter is engaged on the track, as I only wanted the high frequencies from the doubler to pass. The pitch variation was set to about 12 to create a slight difference between the two voices.
The vocal is then duplicated again, sent through the Metal distortion from Waves GTR, and reversed. Before reversing, the vocal is sent through a long reverb; it is then reversed and cut at a zero crossing about one second before the vocal starts, so that it fades into the vocal track.
Figure 75: Tablas.
At around 40 seconds, the tablas come into the mix. I applied parallel compression to them, one chain using the Waves GTR Pitcher and another the Renaissance Axx.
Later, the same sample is also used as an electric guitar.
Figure 76: Pianos.
The piano sounds are taken via Reason and resampled to save CPU processing power.
Figure 77: The Sub.
The sub is made with Operator. As it had some pops and clicks, making the attack smoother gave a smooth, rounded sub. It is then sent through the Overdrive, with a band-pass filter in the distortion unit, and mixed in just a little to add more character to the sub.

Arrangement:

As a good mix starts with a good arrangement, as stated in The Mixing Engineer’s Handbook by Bobby Owsinski, I tried to understand the arrangement and identify the most essential elements of the track. In my song the essentials would be the synth, the drums, the rhythm and the fills.

Mastering:

Gearslutz And More:

For mastering the track I wanted more up-to-date interviews and opinions, so I went to Gearslutz and read interviews and sticky posts on the subject. Many people seem to swear by hardware for mastering, although getting there by means of software is possible. The January 2012 edition of Computer Music magazine (issue 173) had a feature article on mastering audio, with tips and techniques by Mazen Murad, who has mastered many artists including Duffy, Björk and Muse. Reading the article helped me quite a lot in getting closer to my references. He mentions that a mastered track should sound good wherever it is played, on any medium.
Below is part of the compact disc project, from conception to manufacturing, from Bob Katz’s book Mastering Audio: The Art and the Science:
Conception:
Artist
Producer
A&R
Recording:
Artist
Producer
Engineer
Mix Down:
Artist
Producer
Engineer
Premastering:
Artist
Producer
Mastering engineer
Quality Control:
QC engineer
Auditor
Producer
To manufacturing plant.
In our case, the recording and mix down are done; what is left is premastering.
Premastering can include things like putting the songs in a certain order before they are sent to mastering, and may involve the participation of the producer, the artist and the mastering engineer.
“The perfect mix may need no mastering at all!” – Bob Katz
For my song, I had fellow classmates listen and give their opinions on it; based on those opinions I made some changes to the mix prior to mastering.
Figure 78: Stereo widening.
I added the Waves S1 stereo imaging plugin to widen the stereo image of the mix.
Figure 79: Multiband compression.
Multiband compression is added to bring the track closer to the references.
The make-up gain of each band is set by comparing the pre- and post-compression levels.
Figure 80: Waves L1 limiter.
The limiter is added to bring up the overall level. Care is taken not to have attenuation at all times; I only wanted the peaks to be slightly limited.
Figure 81: Comparison with references.
Finally, I compare the track with my reference tracks to bring the overall frequency response close to theirs.
*NOTE: even though the images in the log book show peak-level responses and quicker response times, the spectrums were analyzed at 300ms, which is closer to the integration time of human hearing.
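The 300ms figure corresponds to measuring RMS over a window of that length rather than reading instantaneous peaks. A minimal sketch of such a windowed RMS meter (illustrative; not the analyzer actually used):

```python
import math

def rms_db(samples, sr, window_ms=300.0):
    """RMS level of consecutive windows of `window_ms`, in dB re full scale."""
    w = max(int(sr * window_ms / 1000.0), 1)
    levels = []
    for i in range(0, len(samples) - w + 1, w):
        chunk = samples[i:i + w]
        rms = math.sqrt(sum(x * x for x in chunk) / w)
        levels.append(20.0 * math.log10(max(rms, 1e-9)))
    return levels
```

A full-scale sine reads about -3dB on such a meter while a peak meter shows 0dB, which is why the log book images and the 300ms analysis disagree.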

References:

Morrison, M. (1989). The Power of Music and its Influence on International Retail Brands and Shopper Behavior.
Adams, B. (1998). The Effect of visual/aural conditions on the emotional response to music, Bulletin of the council for Research in Music Education, Vol. 136, Spring.
Alpert, J. and Alpert, M. (1990). Music Influences on Mood and Purchases Intentions, Psychology and Marketing, Vol.7, No. 2, Summer.
Areni, C. and Kim, D. (1993). The Influence of Background Music on Shopping Behavior, Classical Versus Top-Forty Music in a Wine Store, Advances in Consumer Research, Vol 20.
Baker, J., Grewal, D. and Parasuraman, A. (1994). The Effect of Store Atmosphere on Consumer Quality Perceptions and Store Image, Journal of the Academy of Marketing Science.
Bruner, G. (1990). Music, Mood and Marketing, Journal of Marketing, October, pp. 94-104.
de Chernatony, L. and McDonald, M. (1998). Creating Powerful Brands, Butterworth-Heinemann.
Donavan, R. and Rossiter, J. (1982). Store Atmosphere: An Environmental Psychology.
Gardner, M. (1985). Mood States and Consumer Behaviour: A Critical View, Journal of Consumer Research, Vol. 13, December.
Grayston, D. (1974).  Music While You Work, Industrial Management, Vol. 4, June.
Hussey, J. and Hussey, R. (1997). Business Research: A Practical Guide for Undergraduate and Postgraduate Students, Macmillan.
Marsh, H. (1999). Pop Stars of the Retail World, Marketing, January, 1999.
Milliman, R. (1982). Using Background Music to Affect the Behavior of Supermarket Shoppers, Journal of Marketing, Vol. 46.
Ortiz, J. (1997). The Tao of Music, Gill and Macmillan.
Smith, P. and Curnow, R. (1966). Arousal Hypothesis and the Effects of Music on Purchasing Behaviour, Journal of Applied Psychology, Vol. 50, June.
Yalch, R. and Spangenberg, E. (1990). Effects of Store Music on Shopping Behavior, The Journal of Services Marketing, Vol. 4, Summer.
Zikmund, W. (1991). Business Research Methods, Dryden Press.
Hugh Robjohns. (2010). Analogue Warmth. Retrieved on 01 August, 2012 from http://www.soundonsound.com/sos/feb10/articles/analoguewarmth.htm.
Musical MIDI Shoes [Image] (n.d.). Retrieved August 23, 2012, from http://www.instructables.com/id/Musical-MIDI-Shoes/.
Wiimote [Image] (n.d.). Retrieved July 02, 2012, from http://letsmakerobots.com/files/field_primary_image/wiimote.jpg.
Gemini – Discontinued – Replaced by Gemini II / III [Image] (n.d.). Retrieved August 12, 2012, from http://www.seelectronics.com/gemini-valve-mic.
U 87 Switchable Studio Microphone [Image] (n.d.). Retrieved on July 03, 2012 from https://www.neumann.com/?lang=en&id=current_microphones&cid=u87_data.
D-112 [Image] (n.d.). Retrieved on July, 05, 2012 from http://www.akg.com/site/products/powerslave,id,261,pid,261,nodeid,2,_language,EN,view,diagram.html.
Owsinski, B. (2004). The Recording Engineer’s Handbook. Artistpro Publishing.
Shure Beta 91A Instrument Microphone. [Image] (n.d.). Retrieved on June 28, 2012 from http://www.shure.com/americas/products/microphones/beta/beta-91a-half-cardioid-condenser-microphone.
The Directional Boundary Microphone. [Image] (2009). Retrieved on July 13, 2012 from http://www.bartlettmics.com/newsletter/newsletter8-09.pdf.
Beta 57A [Image] (n.d.). Retrieved on August 01, 2012 from http://www.shure.com/americas/products/microphones/beta/beta-57a-instrument-microphone.
Shure SM 57 [Image] (n.d.). Retrieved August 09, 2012 from http://www.shure.com/americas/products/microphones/sm/sm57-instrument-microphone.
Shure SM 57_large [Image] (n.d.). Retrieved June 24, 2012 from http://www.shure.com/idc/groups/public/documents/webcontent/rc_img_sm57_large.gif.
Beta 57A_large [Image] (n.d.). Retrieved on June 05, 2012, from http://www.shure.com/idc/groups/public/documents/webcontent/rc_img_beta57a_large.gif.
NT3 [Image] (n.d.). Retrieved August 02, 2012 from http://www.rodemic.com/mics/nt3.
NT3_freq [Image] (n.d.). Retrieved August 01, 2012 from http://media.rodemic.com//images/mics/nt3/nt3_freq.jpg