What is the reason for turning a mono vocal to stereo?
There’s no reason for it. You don’t have to run the vocal in stereo; you can keep a mono track and create ‘oxygen’, whatever, with effects, or duplicate the vocal and put plugins on it that way.
Is it necessary to duplicate guitar tracks, pan them left and right and switch phase on them?
The only reason to do that is if you’re checking microphones. For example, if the guitar is multi-miked, you don’t pan the duplicates. Just take them off their pan positions, put them in the center, and flip the phase on one while they’re in the center.
Then, if you hear the low end go away, or start to hear a sound you don’t like, you can easily figure out that there’s an issue. But all that matters is what you like. If you think there’s a problem, look at the attack, look at the waveforms. If all the waveforms start at exactly the same time, then it is just a phase thing. If you see that some start earlier or later than others, you may want to shift them so they all have the same start time, then flip the phase.
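The phase check described above can be sketched numerically. This is a minimal illustration, not from the source: two hypothetical mic signals on the same guitar (the second delayed by a made-up 24 samples), summed in the center with and without a polarity flip, and then summed again after time-aligning the starts.

```python
import math

def rms(x):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in x) / len(x))

# Two hypothetical mic signals: the same 100 Hz tone, the second delayed
# by 24 samples (e.g. a mic placed a little farther from the cab).
sr = 48_000
n = 4_800
delay = 24  # illustrative value, not from the source
mic_a = [math.sin(2 * math.pi * 100 * t / sr) for t in range(n)]
mic_b = [math.sin(2 * math.pi * 100 * (t - delay) / sr) for t in range(n)]

# Both mics in the center, then again with mic_b's polarity flipped.
summed = [a + b for a, b in zip(mic_a, mic_b)]
flipped = [a - b for a, b in zip(mic_a, mic_b)]

# A sharp level drop on the flip means the takes are nearly in phase;
# hearing the low end vanish like this is the cue that something is off.
print(rms(summed), rms(flipped))

# Shifting mic_b so the waveforms start together makes the flip cancel
# almost completely, confirming the offset was a simple time shift.
aligned_b = mic_b[delay:] + [0.0] * delay
cancelled = [a - b for a, b in zip(mic_a, aligned_b)]
print(rms(cancelled))
```

Listening does the same job in practice; the numbers just make the "low end goes away" effect visible.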
What are the first five things that everybody should do to any vocals to get it to sit well in the mix?
Once you have a consistent level across the board, whatever you do to it is going to make sense. As for some very simple vocal processing, you can just take away some rumble with EQ. Get the vocal onto one track, or at least more organized. After taking out some rumble, add some shine.
The most important thing is how the vocal sounds in the track. Nobody’s going to hear it in solo; nobody cares. You should always make adjustments to anything in the context of the music.
So, here are the first five things to do with the vocals:
- Get the vocal onto one track, at a consistent level, with clip gain;
- Consolidate it properly;
- Get it to a great level;
- Pick an EQ and a compressor that make sense;
- And then find an effect.
From this point you can get more complicated with effects, but 99% of what’s going to work is already in these steps.
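The first steps above, getting every clip to a consistent level with clip gain before any compression, can be sketched as follows. The helper names and the −18 dBFS RMS target are illustrative assumptions, not values from the source.

```python
import math

def rms_db(clip):
    """RMS level of a clip in dBFS (samples assumed in [-1, 1])."""
    r = math.sqrt(sum(s * s for s in clip) / len(clip))
    return 20 * math.log10(r)

def clip_gain_to_target(clip, target_db=-18.0):
    """Trim a clip so its RMS sits at target_db (hypothetical target)."""
    gain_db = target_db - rms_db(clip)
    gain = 10 ** (gain_db / 20)
    return [s * gain for s in clip]

# Two hypothetical vocal clips recorded at very different levels:
quiet = [0.05 * math.sin(0.01 * t) for t in range(1000)]
loud = [0.8 * math.sin(0.01 * t) for t in range(1000)]

# After clip gain, both clips sit at the same level, so whatever the
# compressor does next "makes sense" across the whole performance.
levelled = [clip_gain_to_target(c) for c in (quiet, loud)]
print([round(rms_db(c), 1) for c in levelled])
```

This is what clip gain in a DAW does by hand; the point is that the leveling happens before, not instead of, the compressor.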
How do you approach depth and dimension? Is it levels, transient content, EQ, reverb, delay, or all of it? How do you decide what gets reverb or delay, and what stays dry?
The more you compress a track, the more hammered it becomes. How can you create dimension when every peak and every valley has been flattened out? The more you compress something, the less dynamics you’re going to have in the song. To create depth and dimension, look at each part, and if you’re going to compress it, compress it as little as possible.
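The point about compression flattening every peak and valley can be shown with a toy static compressor; the threshold and ratio below are arbitrary example values, not a recommendation from the source.

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Very simplified static compressor: reduce anything above threshold by ratio."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

# A made-up part with a quiet passage and a loud passage:
quiet_part = [0.1, -0.1, 0.12, -0.08]
loud_part = [0.9, -0.95, 1.0, -0.85]
track = quiet_part + loud_part

squashed = compress(track)

def dyn_range(x):
    """Ratio between the loudest and quietest peaks: a crude dynamics measure."""
    mags = [abs(s) for s in x]
    return max(mags) / min(mags)

# The compressed version has a smaller loud-to-quiet ratio: less dynamics,
# which is exactly what works against depth and dimension.
print(dyn_range(track), dyn_range(squashed))
```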
The whole thing is the balance, and that’s the way you get the depth. And the way you get that is to put the effects on things while they’re in the track. One of the secrets is to pan everything as wide as possible and to think that you only have three pans: left, center, and right.
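The "three pans" idea can be sketched with a standard equal-power pan law (an assumption on my part; the source doesn’t name a specific pan law).

```python
import math

def pan(sample, position):
    """Equal-power pan. position: -1.0 = hard left, 0.0 = center, 1.0 = hard right."""
    angle = (position + 1) * math.pi / 4  # maps -1..1 onto 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

# Restricting yourself to the three positions keeps placement decisive:
for pos, name in [(-1.0, "left"), (0.0, "center"), (1.0, "right")]:
    left, right = pan(1.0, pos)
    print(name, round(left, 3), round(right, 3))
```

Equal-power panning keeps the perceived loudness constant across positions, which is why it is a common default; the design choice here is only that the three positions are the hard extremes and dead center.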
Is there a recommended input level, for example, minus six dB?
There’s no recommended mix level. The highest technology in the world was consoles and outboard gear, and then electronics were added to work with them. One day, we’ll be able to take the GUI and make it fill the screen, but right now we don’t have that technology. Mixing is all about gain structure. If you don’t have your gain structure right, set it up in your session.
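One hedged way to read "set up your gain structure" in a session is to trim each track so its peak leaves some headroom before the next stage. The 6 dB figure below is only an example; as the answer says, there is no single recommended level.

```python
import math

def peak_db(samples):
    """Peak level in dBFS of a list of samples in [-1, 1]."""
    return 20 * math.log10(max(abs(s) for s in samples))

def trim_for_headroom(samples, headroom_db=6.0):
    """Trim the track so its peak sits headroom_db below full scale (example value)."""
    gain = 10 ** ((-headroom_db - peak_db(samples)) / 20)
    return [s * gain for s in samples]

# A hypothetical track recorded hot, peaking near full scale:
hot_track = [0.99 * math.sin(0.02 * t) for t in range(500)]
staged = trim_for_headroom(hot_track)
print(round(peak_db(staged), 1))
```

Doing this per track is one simple form of gain structure: every fader and plugin downstream then sees a sensible level.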