Are Composers Properly Mixed Up?

November 6, 2010


This is an interview with freelance recording engineer / scoring mixer / mastering engineer John Rodd, who has recorded, mixed and/or mastered for many film composers including Cliff Martinez and John Frizzell, as well as numerous video game projects including Assassin's Creed II (Jesper Kyd), World of Warcraft (Russell Brower and others) and Avatar: The Game (Chance Thomas). In addition to working internationally for over 20 years, John was Orchestral Scoring Recordist at the 20th Century Fox Newman scoring stage from 1997 to 2004.

Thanks for sitting down and doing this with me, John. I appreciate it! There are two main areas I’d like to cover. One is the area of what composers are going to give a guy like you, if they are not going to be the music mixer.

They will be creating a multitrack from any virtual instruments that they have at their studio, plus they’re going to be including recordings of any soloists or ensembles that have been recorded.

Right, now in terms of audio tracks, a traditional composer like me, I don’t think in terms of strings. I think in terms of violin 1, violin 2, viola, cello, and bass. It’s just the way I’m hardwired to think. So, I’m not going to be giving an engineer a string pad. But, would you prefer, let’s say, if I had the mix exactly the way I like it, a stereo string submix or do you prefer to have them separated out?

I strongly prefer that different instruments are not combined onto one audio file. If they are, I have very little control of them in the mix. With that said, I know that every project has a deadline, and a budget, and a schedule, and depending on how the composer has their composing template set up at their studio certain things may have been combined already. I won’t make an issue of it, and I’m not interested in creating a lot of work and a lot of headaches for a composer. But, in a perfect world, I’d like to have violin 1, separate from violin 2, separate from the viola, separate from the cellos and separate from the basses. And I often work with people who have multiple virtual libraries and in these cases I’d like to have the East West Violins separate from the LASS Violins. This gives me a large amount of sonic control on all the different instruments. When I’m working with either a virtual project or a hybrid project, I find that different sample libraries bring to the table their own characteristics. With some libraries I’ll get the ‘wood’, or the rosin, or the attack, or the bow, or the space, or the warmth, or the bloom. I can get different sonic characteristics out of different sample libraries, and I very much like it when composers have multiple sample libraries because then I can get a more realistic, fuller, and richer virtual orchestra sound.

Well, I certainly do have multiple sample libraries and most of the guys I know do. But I think most of us probably assume, and I’m glad to have this conversation, that you would like our premixed violin 1, if you will, that has the ratio of the Hollywood Strings with Kirk Hunter’s Strings, in my case, and sometimes LASS, or even Sonivox, or whatever. They’re different, and as an orchestrator, and this probably depends on the composer, whether they’re more of a trained guy or more of a record guy who has transitioned, I see that as orchestration.

That’s a very good point, Jay. I’d want all of the relative volumes of the sample libraries to be just as they were composed. So in other words, if you were using LASS in the background just for a bit of fill in the composition, I’d want the relative volumes of the LASS virtual instruments on the multitrack to be at the volume you composed them at. Obviously, the compositional process is very involved with volumes and dynamics. The shaping of the virtual orchestra, the “synthestration” as I call it, is an integral part of making a virtual orchestra sound as good as it can sound. So I’d want all the notes in the volume perspective that the composer has written them in.

Interesting!

Yeah, well, it saves me a lot of time. I don’t need to reinvent the wheel and I’m not a composer. I’m not going to want to have to do all the diminuendos and so forth. That’s in the composition, in the “synthestration” if you will. But as we all know, different sample libraries have different panning and amounts of built-in ambiance. I use a lot of high-end hardware reverb processors to make everything sound as if it’s in the same place, if that’s appropriate for that particular score.

Well, if I’m sending the audio out to be mixed, I turn off the reverbs. I don’t want to paint the engineer into a corner.

Exactly. I wouldn’t want any extra reverb added to the sample libraries. I understand that some sample libraries have built-in ambiance, but I wouldn’t want any extra added. Whatever could be turned off, I’d like to have turned off. This means I have a lot of tracks in the multitrack, but that’s not a problem. I routinely mix well over a hundred tracks of audio per cue. I like having that amount of control for individual panning, EQ, reverb, compression, and so on. This way I can really make things shine.

Here’s a question for you because I’m asked this all the time and frankly I really don’t have a good answer, and I’ve tried it different ways. When would you think it’s advisable to use a mono virtual instrument versus a stereo one? Now obviously, if it’s a flute, the flute is a monophonic instrument, and yet a lot of people will say, “Yeah, but I’m still going to use a stereo virtual instrument because of the way it was recorded with the ambiance; it’s going to be a stereo ambiance.” And yet in my experience fooling around with this stuff, a mixture of mono and stereo seems to sound better to me than having everything stereo. Do you have any guidelines for how to approach that?

That’s a difficult question. If I were doing the recording, I’d record a simple instrument, like a solo acoustic guitar, in mono with one microphone. That’s standard engineering technique.

Right. So, would we replicate that with a virtual instrument or not?

I’d really have to say that I evaluate everything on a case-by-case basis.

But do you usually, for instance, if somebody brings you a synthestration and they give you a flute part, will it generally be a mono audio file or will it be a stereo one?

If the sample library is stereo, I’d rather have it stereo. Then I have the option of tucking it in, or panning it, or collapsing it; dealing with it as I think sounds best in that particular instance.
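For composers curious what “collapsing” a stereo file actually involves, here is a minimal sketch in Python with NumPy. The -3 dB pad is just a common starting point, not anything John specified, and the right amount is ultimately a listening decision.

```python
import numpy as np

def fold_to_mono(stereo: np.ndarray, pad_db: float = -3.0) -> np.ndarray:
    """Collapse a (samples, 2) stereo buffer to mono.

    Summing left and right can raise fully correlated material by up to
    6 dB, so a pad is applied; -3 dB is a common starting point, but the
    final amount is a judgment call made by ear.
    """
    gain = 10 ** (pad_db / 20.0)
    return (stereo[:, 0] + stereo[:, 1]) * gain
```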

Let’s talk a little about EQ. If composers are going to send you the files rather than mixing it themselves, how much EQ should they be doing? Should they be doing just a little when it really gives it a sound that they feel married to, in which case they’d better be pretty darn sure that they’re not screwing something up for you, or would you say that in general, if they’re giving it to you, there’s just no need to EQ, period?

Like many things in life there are guidelines and exceptions. I’ll tell you my guidelines first and then I’ll explain the exceptions. Again, this is always a discussion I have with the composer at the beginning of the project. It’s important to sort out the best workflow and the best organization for everybody, to make it easy on everyone. I never want to be a ‘make work’ kind of guy. I’m definitely the ‘easy to get along with’ kind of guy.

Thank you!
(Jay and John laugh.)

So in general, when a composer is producing a multitrack for me to mix from, I don’t want additional reverb, I don’t want their EQ, and I don’t want their compression, because I have better-sounding hardware EQs, reverbs, and compressors, pretty much without exception. The exception to that rule is if a composer has created a specific sound, for example, a pad with delays and chorusing; a cool signature sound. I would absolutely want to have that as they have created it.
If the composer thinks I might want to have it without the repeating delay, so I could tap the repeating delays into the surrounds or do something else with it, then they could give me two versions — one with the repeating delays on it and one without. Again, that would be a discussion to have in advance. But generally speaking, I don’t want the composer’s EQ, reverb, and compression. Another important thing to mention is that in terms of peaks of audio on each audio track on the multitrack, I would like to have peaks no more than about two-thirds of the way up.

You’re anticipating me. I was going to ask you about headroom next. I work in Logic and as you know, Logic is a 32-bit floating-point app, unlike ProTools, which is a fixed-point app. In theory, in Logic, as long as the stereo output isn’t going into the red, everything else can go into the red because the headroom adjusts. My feeling is where this breaks down, which I learned the hard way, is when you start to bring in third-party plug-ins, because they can be coded differently and can respond to floating point differently. I was always told by engineers to control the level at the source. Make sure that you’re always trying to stay around -6 at the most. Leave a little headroom. I have become convinced from my experience, because I probably use more third-party stuff than I use Logic stuff, that if I turn on pre-fader metering so I see what’s hitting the channel strip and I control the levels throughout the signal chain, it just seems to me that when I mix on my own or send the files out, it sounds better.
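A minimal illustration of the floating-point headroom Jay is describing, using Python and NumPy purely as a stand-in for a DAW’s mix path (nothing here is specific to Logic or ProTools): a signal pushed 6 dB over full scale survives a float bus and can be trimmed back cleanly, but once any stage clamps it to fixed-point range, the overs are gone for good.

```python
import numpy as np

# A 1 kHz sine that peaks 6 dB above full scale (linear peak of 2.0).
sr = 48000
t = np.arange(sr) / sr
hot = 2.0 * np.sin(2 * np.pi * 1000 * t)

# Floating-point bus: the overs survive, so trimming afterwards
# recovers an undistorted waveform.
float_bus = hot.astype(np.float32)
print("float path, peak after a -6 dB trim:", float(np.max(np.abs(float_bus * 0.5))))  # ~1.0

# A stage that clamps to fixed-point range: the overs are permanently lost,
# and trimming later just gives a quieter, flat-topped waveform.
clipped = np.clip(hot, -1.0, 1.0)
print("clamped path, peak before any trim:", float(np.max(np.abs(clipped))))  # 1.0, squared off
```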

To go back about 20 or 25 years to early digital, unfortunately, the converters were 16 bit. In reality, they were more like 14 bit, and they sounded pretty terrible. You needed to get a relatively hot signal onto the digital tape machine to avoid the awful-sounding noise floor. Well, fast forward, and everyone these days should be working in 24 bit in their DAWs. There’s no reason not to be working in 24 bit. The theoretical signal-to-noise ratio is enormous, and it’s much bigger than that of the equipment we’re using. It’s bigger than the microphone pre-amps. So I always suggest that people put the peaks around two-thirds of the way up. I’ve pushed the limits of big ProTools HD rigs with many, many audio tracks. Now, if you had 150 audio tracks in the multitrack and they’re all very loud, I’ve got to substantially turn every track down just to get a proper stereo or surround mix.
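The arithmetic behind John’s point is easy to check. Here is a sketch, with the -18 dBFS peak level chosen purely as an example and not as a recommendation from the interview:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal N-bit converter, in dB."""
    return 20 * math.log10(2 ** bits)   # roughly 6 dB per bit

print(f"16-bit: {dynamic_range_db(16):.0f} dB")   # ~96 dB
print(f"24-bit: {dynamic_range_db(24):.0f} dB")   # ~144 dB

# Even with conservative peaks -- say around -18 dBFS, a figure used here
# only as an example -- the signal still sits well over 120 dB above the
# theoretical 24-bit noise floor, which is already far below the self-noise
# of any microphone preamp in the chain.
example_peak_dbfs = -18
print("margin above theoretical floor:",
      round(dynamic_range_db(24) + example_peak_dbfs), "dB")   # ~126 dB
```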

So even in floating point you would still have this problem? Even though the headroom theoretically adjusts?

Yes. If you have 150 loud tracks, you’re going to have to turn them all down somewhat to combine them to a mix. It’s just gain structure. Even if you’re working on an analog console it’s the same thing. And then I end up with faders that are all near the bottom; the faders are all at -15 or so and little tiny movements on the fader make a big change. There is no single good reason to record hot in 24 bit, but there are plenty of good reasons not to.
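To put a number on that gain-structure problem: in the worst case, n equally hot tracks that sum in phase can rise by 20·log10(n) dB on the mix bus. Uncorrelated material sums closer to 10·log10(n), but the direction of the problem is the same: hot individual tracks force every fader down. A quick sketch:

```python
import math

def worst_case_rise_db(n_tracks: int) -> float:
    """Worst-case level increase when n equally hot tracks sum in phase."""
    return 20 * math.log10(n_tracks)

for n in (10, 50, 150):
    print(f"{n} hot tracks: the bus can rise up to "
          f"{worst_case_rise_db(n):.1f} dB above a single track")
# 10 -> 20.0 dB, 50 -> 34.0 dB, 150 -> 43.5 dB
```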

How much panning do you want the composer to do? I mean like you say, some of the libraries kind of have some pre-panning already.

I would want the panning on the instruments in a virtual orchestra to be as it comes out of the library, and I’ll polish it and finesse it.

Am I correct in assuming that if the composer really knew what he was doing with this stuff, and he bounced everything at the right relative levels, you basically could set your thing up to unity gain, and the balances would almost mix themselves? Much like a good orchestrator. Albert Harris, when I studied with him, used to tell me that the best compliment he ever got from a mixing engineer, I think it was Al Schmitt, was, “Al, you’ve balanced it so perfectly in the orchestration that there’s nothing for me to do.”

Right, exactly, which is why when the synthestration is good and when I get things in the perspective as the composer left them — albeit, without added reverb, and without their EQ, and so forth — then the mix should sit as they envisioned it. That way I’m improving the overall sound and the mix, and I’m not trying to recreate the composition.

Do you also want to hear a bounce of the stereo mix so you get where the composer was thinking?

Absolutely yes. I always want to get a stereo rough mix, and I always ask the composer, “Do you like your rough mix or do you NOT like your rough mix?” (Jay laughs) so that I know if I’m matching it to start or just using it as a point of reference.

Do you want a tempo track and a click track as well?

If I’m getting audio files that I’m going to bring into ProTools, I’d like to get a MIDI tempo map. That way I can have bars and beats that match their bars and beats, and if I have delays I can sync them to the MIDI tempo track. If there is a PDF of the score handy I’d be happy to look at that as well, as I do read scores.

Let’s move on now to where the composer is in the position where he can’t afford a ‘John Rodd’. I see a lot of Logic projects (what is called a session in ProTools) where there is an EQ on virtually every track. I always have thought of EQ as a problem fixer so if you have to put an EQ on every track, something went awfully wrong. Is that just an old fashioned idea? I’m talking about virtual instruments with a handful of real players.

When I record a real orchestra I use very little EQ, as I’ll have put the right microphone in the right place and I’ll have set up the layout of the orchestra in the best possible way to serve the composer’s vision.

Right, so we’re trying to simulate that with a virtual orchestra, and the libraries we use are somewhat EQ’d; some come with EQ settings that you can choose to use or not use. Give me some guidelines, if you will, for the virtual instruments and libraries that you are familiar with: how much EQ do you think they require?

When there are multiple libraries involved, I will sometimes use corrective EQ to make the libraries match a little more. It really depends on the project and the composer’s intent. On the overall mix buss, I almost always use some high-end hardware EQs to shape and sculpt the sound, give it some more color, air and girth.

Which makes a big difference.

Yes.

There are a whole bunch of convolution reverbs available now, and some guys are doing a lot of fooling around with using multiple reverb instances, like Altiverb and Space Designer in Logic, and they may have two reverb instances for each section, one for early reflections and one for tails. I started to try this myself and I have to say, at the end of the day I can’t say that I heard a tremendous amount of difference. I’m not going to say I don’t hear any difference, but for the additional CPU strain and, as we say in Yiddish, all the ‘potchke’ involved, I am not hearing a dramatic enough difference to warrant all the focus on this that I read on forums.

Yes. In my experience, reverb is only one part of the equation. Mixing for me is reverb, panning, EQ, and so on, and it’s a lot of little decisions. Hopefully, a hundred or so of these little decisions add up to a great sounding mix. My first port of call are my numerous hardware reverbs, the Bricasti M7 and TC Electronic 6000, and that kind of stuff. It’s difficult to exactly quantify, but I find that my hardware reverbs are about 25% better for what I’m doing than any plug-in reverb that I’ve ever used. I’ll sometimes use Altiverb as an additional layer of reverb to create a specific space, so I definitely do use plug-in reverbs, but I lean heavily on my hardware reverbs.

I guess where I get confused is that I am mostly thinking of a concert hall paradigm and call me crazy, but the string players are in the same hall that the brass players are in. Why would they need a different reverb?

Yeah, when I’m mixing a score, particularly a virtual or a hybrid score, I am generally trying to make things sound more cohesive, to sound more like a real ensemble, especially when dealing with many different sample libraries. I’ll generally use one of my hardware reverbs, either in stereo or surround, as the primary reverb and then add other reverbs to it. For example, sometimes a live human choir won’t work the way I want it to sonically using just the main orchestral reverb, so it needs to have a different flavor of reverb just to fit into the sound of the score. Many projects have many, many disparate sounding prerecorded elements, so I’ll sometimes use one reverb for the snare drum and drum kit, a different reverb for the synth pads, a few different reverbs for the various guitars, and yet another one for some of the drum loops, for example.

And yet, in a real concert hall….?

Not always. And of course, there is no perfect concert hall. (Jay laughs.)

So maybe the correct paradigm is not the concert hall when you’re sitting there but the recording of that performance.

I work on a wide variety of projects, and in all genres of music. I do film scores, video game scores, CDs, the occasional TV show score, and I record, I mix, and I master. So I always consider what the project is. If it’s a film, do I need to be extra concerned about dialog in a certain scene? Are there a lot of sound effects? Is it an action thing or is it a love story? I’m always cognizant of the fact that I have to serve the composer who hired me, but I also need to serve the project. I always reference the picture, the dialog, and the sound effects in a film because my mix might need to be more aggressive overall, might need to be more present, might need to be drier, and certain musical elements of the score might need to be more in your face to work well in the overall context of the film.

Do you think it is useful for a composer who is doing his own mixing to run a score he likes through a Match EQ for analysis and use those settings, not rigidly of course, but as a guideline that you might stick on a 2 buss?

I think the most helpful thing to do is to find a score CD that you like the sound of and that’s in the same musical genre as the score that you are composing. I would bring that cue into your DAW, lower its level a bunch because it’s been mastered and is therefore louder, and then compare back and forth. Obviously, most composers don’t have the production resources available to them that the top Hollywood composers do, but I would listen for things like the punch of the drums, the clarity, the reverb, and the panning. You can learn a lot by A/B-ing back and forth.
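One way to take “lower its level a bunch” beyond guesswork is to match the RMS level of the reference to the working mix before comparing (loudness-normalizing by LUFS would be more accurate still). This is an illustrative sketch with hypothetical buffers, not a procedure John prescribed:

```python
import numpy as np

def rms_dbfs(x: np.ndarray) -> float:
    """RMS level of an audio buffer, in dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def reference_trim_db(reference: np.ndarray, working_mix: np.ndarray) -> float:
    """Gain, in dB, to pull the mastered reference down to the level of the mix."""
    return rms_dbfs(working_mix) - rms_dbfs(reference)

# Example: a mastered cue reading -12 dBFS RMS against a working mix at
# -20 dBFS RMS needs roughly -8 dB of trim before the A/B comparison is fair.
```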

Oh, one more EQ question. When you hear engineers talk about EQ, they almost always say, “Think subtractively. Listen for the offending frequencies and try to take those out instead of boosting.” But then when I am in the studio with them I see them boosting all the time. They do a lot of additive EQ, despite what they say when talking to new engineers. They don’t always practice what they preach.

I do both on a daily basis. Different situations call for cutting or boosting. They’re both great and I would always want to be able to use them both.

Do you have any additional tips for composers who have to serve as their own engineers?

Yes. Whenever I give a lecture or workshop, someone always asks, “I’m thinking about buying these speakers; what do you think about these?” The first thing I say is, “Well, those may or may not be good speakers, but how much acoustic treatment have you done in your listening environment?” Most often the answer is “none.” I would very much encourage composers to do some acoustic treatment in their listening environment, but NOT with foam. Foam is good for protecting hard drives when they’re being shipped somewhere, but foam is not good for acoustic treatment, in my opinion. There are now lots of products that can be purchased relatively inexpensively that you hang in the corners of a room to tame bass problems; bass is the frequency range where people have the most trouble.

The music is going to sound better wherever you sit in the room, plus the bass traps can be hung up on the wall with hooks, and if they move to a new living situation, the bass traps can easily be taken with them. Any speaker they put in there is now going to sound better. If they can create a more accurate mixing environment, then they will make better mixing decisions.

What about things like electronic room correction that some companies now have as a plug-in, or part of their speakers. Not much of a believer?

The short answer is that I believe in physics. You MIGHT be able to improve the room a little bit in one position, but what happens when you move two feet back? What happens when you move six feet back to where the film director is sitting? You can’t beat physics, so I don’t believe in those sorts of electronic solutions. I believe in bass treatment in the corners, bass traps, some absorption, and some diffusion throughout the room.

What about the book shelves with the staggered books method?

Diffusion is good, but most rooms have the biggest problem with bass build up against the back wall and a lot of distortion of the linearity of the bass frequencies. As a result, composers have a lot of trouble making decisions about the level of various bass frequencies when they mix, and then their mixes don’t translate well to the outside world.
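The physics behind that bass build-up is easy to see from a room’s axial standing-wave (mode) frequencies, which fall at multiples of c/2L along each dimension. Here is a sketch using a hypothetical 4 m dimension, purely as an example:

```python
def axial_modes_hz(dimension_m: float, speed_of_sound_m_s: float = 343.0, count: int = 4):
    """First few axial standing-wave frequencies along one room dimension."""
    return [round(n * speed_of_sound_m_s / (2 * dimension_m), 1)
            for n in range(1, count + 1)]

# A hypothetical 4 m room dimension:
print(axial_modes_hz(4.0))   # [42.9, 85.8, 128.6, 171.5] Hz
# All of these land in the bass, which is why an untreated room piles up and
# cancels energy down low, and why broadband bass trapping in the corners
# does far more good than thin foam on the walls.
```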

Well, this has been really interesting and I am sure will be very helpful to a lot of composers. Thanks, John.

You’re very welcome.

Comments

By Paul Henry Smith on November 6th, 2010 at 1:56 pm

Thank you Jay and John for publishing this. I think your “situation-based” approach to mixing makes a lot of sense. (As opposed to blindly following some rules because they are “correct”). And I also love the idea of bringing a pre-existing CD recording directly into the project/session and A/B-ing it. I’m definitely going to try that on my next project.

I work almost exclusively with digital orchestral instruments, so I find it very difficult to get that warm, deep bass that just seems to round out and fill a mix without the bass actually being loud. I suspect that the way bass behaves in an acoustic orchestral setting is VERY different than having bass come from samples and bused to a reverb or something. I can’t put my finger on it, but I’ve NEVER been able to do as well as I would like with sample-based bass. (Sometimes I’ll enhance with synthesized bass 20 dB lower and panned to the center, but that only works “OK.”)

Another thing I’d recommend you take a look at, if you use Logic, is Hermode Tuning, which will make all 3rds and 5ths in tune (instead of equal tempered), resulting in an amazing blend that just can’t be accomplished with EQ, reverb, panning and gain by themselves. (This only works on the software instrument tracks, not on audio, so it may not be relevant at your stage, John, but for composers baking their audio to send to you, I’d urge you to try Hermode tuning to see if it helps achieve a better blend, especially with brass and strings.)
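For anyone curious how far equal temperament actually sits from the pure intervals that Hermode-style adaptive tuning pulls toward, the deviations are easy to compute. This says nothing about Logic’s particular implementation; it only shows the size of the discrepancies involved:

```python
import math

def cents(ratio: float) -> float:
    """Size of a frequency ratio, in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

just_third, just_fifth = 5 / 4, 3 / 2               # pure (just) intervals
et_third, et_fifth = 2 ** (4 / 12), 2 ** (7 / 12)   # equal-tempered intervals

print(f"major third: just {cents(just_third):.1f} c, ET {cents(et_third):.1f} c "
      f"(ET is about {cents(et_third) - cents(just_third):.1f} c sharp)")
print(f"perfect fifth: just {cents(just_fifth):.1f} c, ET {cents(et_fifth):.1f} c "
      f"(ET is about {cents(just_fifth) - cents(et_fifth):.1f} c narrow)")
# major third: ~13.7 cents sharp of pure; perfect fifth: ~2 cents narrow.
```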

By Rosemary Altshuler on November 6th, 2010 at 5:35 pm

Very interesting article, Jay! I’m sure this gave a lot of engineers out there some food for thought! R. :)

By Jason T Miller on November 8th, 2010 at 10:27 am

As always, great interview, Jay. I’ve had the pleasure of meeting John, and not only is he great at his job, but he’s a very nice guy! Your hard work (and John’s) is much appreciated!

By Mervyn "Funkmaster" Jordan on November 18th, 2010 at 10:58 am

Jay, that was a great interview. John knows what he is talking about when it comes to accepting “audio creations” from various sources. One of the things I will never forget, which was taught to me in recording school, is that you should almost never attach reverb, delays, and the like to a sound prior to your final mix. All effects used during a recording session should be for listening only. As most experienced engineers know, once it’s on tape, it cannot be removed. Thank you.

By Lewis West on December 4th, 2011 at 4:46 am

A very insightful interview. Thanks for sharing!
