This page is an overview of the content for this topic; enough for revision or to get you started. For a more in-depth explanation and diagrams, you can purchase the full course here. The small fee for the full course supports me in keeping the website running.
We hear stereo because of time differences between the left and right signals. This replicates the real world, where sounds reach our left and right ears at slightly different times. A stereo image can be generated by recording a source with two microphones; you can also emulate this by panning mix elements or adding ‘stereo effects’, such as short delays.
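As a rough illustration of the ‘short delay’ trick, here is a minimal sketch in Python / NumPy that turns a mono signal into a pseudo-stereo pair by delaying one channel by a few milliseconds. The function name, delay time and sample rate are example values of mine, not taken from any particular tool.

```python
import numpy as np

def pseudo_stereo(mono, sample_rate=44100, delay_ms=10.0):
    """Create a pseudo-stereo pair from a mono signal by delaying one channel.

    Delays of a few milliseconds are heard as added width rather than as a
    separate echo, which is the basis of this trick.
    """
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    left = mono
    # Pad the right channel with silence so both channels stay the same length
    right = np.concatenate([np.zeros(delay_samples), mono[:len(mono) - delay_samples]])
    return np.stack([left, right])  # shape (2, num_samples)

# Example: a one-second 220 Hz sine wave as the mono source
t = np.arange(44100) / 44100
stereo = pseudo_stereo(np.sin(2 * np.pi * 220 * t))
```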
Stereo is not always preferable to mono; for example, in crowded clubs or at large concerts, the speakers may be very widely spaced, meaning someone near the left speaker might hear little or nothing from the right. Small / cheap speakers (e.g. phones) may not be able to produce stereo that is notably better than mono.
Stereo widening
- Our brains don’t perceive very short delays (a few milliseconds) as echoes or repeats (this is the Haas, or precedence, effect), so when the left or right channel is slightly delayed against the other and both are played back, our brains hear the signal as being wider
- Setting complementary EQ settings on left and right channels or duplicates can create a sense of stereo width
- Stereo reverb can be used to give mono tracks some sense of stereo by placing them in a space
- Stereo imager / widening plugins use a mix of phase, short delays and mid-side processing to increase width
- Stereo widening can also have a negative impact on phase coherence, causing some frequencies to cancel, particularly when the mix is summed to mono, so it is important to consider the mono compatibility of your mix
- Mid-side processing encodes the left and right channels into a mid channel (the sum of left and right, carrying the centre of the image) and a side channel (the difference, carrying the stereo content); see the sketch after this list
- Separating the mid and side signals enables different processing of each; however, most sources appear in both mid and side to some degree, so some overlap always remains
- A goniometer gives a visual indication of stereo width along with how strong the centre signal is; a correlation meter indicates when phase cancellation might be occurring.
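As a minimal sketch of how mid-side encoding, width adjustment and a correlation reading work (Python / NumPy; the function names and the width range are my own choices, not taken from any particular plugin):

```python
import numpy as np

def ms_encode(left, right):
    """Encode left / right into mid (sum) and side (difference) signals."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side):
    """Decode mid / side back into left / right."""
    return mid + side, mid - side

def widen(left, right, width=1.5):
    """Scale the side signal to change stereo width.

    width < 1 narrows the image towards mono, width = 1 leaves it unchanged,
    width > 1 widens it (and increases the risk of cancellation on mono sum).
    """
    mid, side = ms_encode(left, right)
    return ms_decode(mid, width * side)

def correlation(left, right):
    """Rough correlation-meter reading: +1 = identical (mono), 0 = unrelated, -1 = out of phase."""
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return float(np.sum(left * right) / denom) if denom > 0 else 0.0
```

Scaling the side signal and decoding back to left / right is roughly what a basic widener does; checking the correlation of the result gives a first warning if the widening has pushed the mix towards phase cancellation.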
Mono compatibility
- We hear stereo because of time differences between e.g. two microphones or two channels of audio
- Summing the left and right channels of a stereo mix means collapsing the stereo image into mono
- Timing differences mean that the left and right signals have a complex phase relationship
- Therefore, when summed to mono, phase cancellation can occur, making the sound weak at some frequencies (see the sketch below).
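To see why, here is a small sketch (Python / NumPy, with an assumed 1 ms inter-channel delay): summing a signal with a delayed copy of itself creates a comb filter, with cancellation at odd multiples of 1 / (2 × delay).

```python
import numpy as np

sample_rate = 44100
delay_s = 0.001  # assumed 1 ms delay between the left and right channels

# Left channel: white noise; right channel: the same noise delayed by 1 ms
rng = np.random.default_rng(0)
left = rng.standard_normal(sample_rate)
delay_samples = int(sample_rate * delay_s)
right = np.concatenate([np.zeros(delay_samples), left[:-delay_samples]])

# Summing to mono turns the delay into a comb filter:
# cancellation occurs at odd multiples of 1 / (2 * delay)
mono = 0.5 * (left + right)
nulls = [(2 * k + 1) / (2 * delay_s) for k in range(4)]
print("Frequencies weakened by the mono sum (Hz):", nulls)  # 500, 1500, 2500, 3500
```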
Panning law
- Panning law is all about how we manage the level of sounds in the left and right channels; without compensation, a signal panned to the centre position will sound louder than the same signal panned hard left or right
- This is because to place something in the centre of the stereo image, you send it to both the left and right channels, so it is reproduced by both speakers; pan laws compensate by attenuating centre-panned signals, typically by around 3 dB
- This matters when something is panned across the stereo field and we want it to seem like it stays at the same level, and when a track is summed to mono and we want to maintain the balance between centre-panned mix components and those panned left or right
- If a signal is being panned to a static position, you will already have made a decision about how loud to mix it, so the choice of panning law matters much less; a constant-power law is sketched below.
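A common way to implement this is the constant-power (-3 dB) pan law; the sketch below (Python / NumPy) shows the standard cosine / sine gain curves, though the exact mapping from pan position to angle is an assumption on my part.

```python
import numpy as np

def constant_power_pan(pan):
    """Return (left_gain, right_gain) for a pan position in [-1, 1].

    -1 is hard left, +1 is hard right. With the constant-power (-3 dB) law the
    centre position gets cos(pi/4) = sin(pi/4) ~= 0.707 in each channel, so the
    combined power (and roughly the perceived loudness) stays constant as a
    source moves across the stereo field.
    """
    theta = (pan + 1) * np.pi / 4  # map [-1, 1] onto [0, pi/2]
    return np.cos(theta), np.sin(theta)

for pan in (-1.0, 0.0, 1.0):
    l, r = constant_power_pan(pan)
    print(f"pan={pan:+.1f}: L={l:.3f}, R={r:.3f}, power={l ** 2 + r ** 2:.3f}")
```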
The impact of stereo
- Stereo gives a sense of width and space in recording, making music sound more immersive and realistic
- The use of stereo became more commonplace and sophisticated in the 1960s; musicians and producers placed sounds and instruments in specific locations, making listening more immersive
- 60s panning can be described as polarised; elements of the mix are panned hard left, centre or hard right
- Recordings often sounded unusual in comparison to modern expectations e.g. the drum kit might be hard left
- More tracks in the 70s meant more isolated layers and more control of the placement of these in the stereo field
- Stereo effects pedals such as phasers and flangers became more common in the 1970s
- Quadraphonic sound was a 70s development that used four audio channels in which speakers are positioned at the four corners of a listening space; it was not commercially successful but sowed the seeds for 5.1 and 7.1
- In the 80s, the rise of digital technology led to an abundance of stereo effects e.g. reverb / delay / modulation
- Early samplers sometimes used mono samples rather than stereo to save space
- The dominance of CDs by the 90s meant many older recordings were remastered, bringing e.g. polarised panning ‘up to date’
- Whilst modern mixes make extensive use of stereo, hard panning of individual instruments in the mix is less common
- Mixes are wide, with stereo widening, effects and mid-side (MS) processing used to create interest in the stereo field
- Several streaming services now offer music in Dolby Atmos / spatial audio formats.