
Sound mixing is a crucial step in post-production, blending audio elements into a cohesive soundtrack. It enhances emotional impact, guides audience attention, and creates realism. In narrative documentaries, sound mixing supports the story, builds atmosphere, and ensures dialogue clarity.

Techniques include equalizing for clarity, compressing for consistency, and adding reverb for space. Music mixing involves setting levels, equalizing tracks, and sidechaining for dialogue. Sound effects are layered, panned, and given realistic spaces to create immersive soundscapes.

Importance of sound mixing

  • Sound mixing is a crucial step in post-production that involves blending all the audio elements (dialogue, music, sound effects) into a cohesive and immersive soundtrack
  • A well-mixed soundtrack enhances the emotional impact of the story, guides the audience's attention, and creates a sense of realism and space
  • In narrative documentary production, sound mixing plays a vital role in supporting the story, building atmosphere, and ensuring clarity and intelligibility of the dialogue

Dialogue mixing techniques

Equalizing for clarity

  • Using equalization (EQ) to sculpt the frequency spectrum of dialogue recordings, emphasizing the important speech frequencies (typically 2-4 kHz) for improved clarity
  • Removing unwanted low-frequency rumble, room resonances, or high-frequency hiss to clean up the dialogue
  • Applying high-pass filters to reduce low-frequency background noise, and gentle low-pass filtering or de-essing to soften sibilance or harsh consonants
  • Example: Boosting the 2-4 kHz range to bring out the clarity in a softly spoken interview
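
The rumble-removal step above can be sketched in code. This is a minimal first-order high-pass filter in plain Python; real dialogue chains use steeper parametric EQ filters, and the 80 Hz cutoff here is just an illustrative choice.

```python
import math

def one_pole_highpass(samples, cutoff_hz, sample_rate=48000):
    """First-order high-pass: removes low-frequency rumble below cutoff_hz."""
    # Coefficient from the standard RC high-pass difference equation
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_out + x - prev_in)  # y[n] = a*(y[n-1] + x[n] - x[n-1])
        out.append(y)
        prev_in, prev_out = x, y
    return out

# Rumble (near-DC energy) decays toward zero; speech-band content passes
cleaned = one_pole_highpass([1.0] * 1000, cutoff_hz=80.0)
```

A parametric bell boost in the 2-4 kHz presence range would follow the same pattern with a second-order (biquad) filter.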

Compression for consistency

  • Utilizing dynamic range compression to even out the variations in dialogue levels, making quieter parts more audible and louder parts less overpowering
  • Setting appropriate threshold, ratio, attack, and release times to achieve a natural-sounding compression without pumping artifacts
  • Applying gentle compression to the dialogue bus to glue the various dialogue tracks together and maintain consistency throughout the mix
  • Example: Using a compressor with a 2:1 ratio and a medium attack/release to even out the dynamics of a conversation recorded in a noisy environment
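
The 2:1 example above can be sketched as a simple feed-forward compressor. This is a teaching sketch, not a production dynamics processor: the level detector and smoothing are deliberately minimal, and the default threshold of -20 dB is an assumed value.

```python
import math

def compress(samples, threshold_db=-20.0, ratio=2.0,
             attack_ms=10.0, release_ms=100.0, sample_rate=48000):
    """Feed-forward compressor: level above threshold is reduced by `ratio`."""
    atk = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env_db = -120.0  # smoothed level detector, in dB
    out = []
    for x in samples:
        level_db = 20 * math.log10(max(abs(x), 1e-6))
        # Fast coefficient when level rises (attack), slow when it falls (release)
        coef = atk if level_db > env_db else rel
        env_db = coef * env_db + (1 - coef) * level_db
        over = max(env_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)  # 2:1 halves the overshoot
        out.append(x * 10 ** (gain_db / 20))
    return out
```

A full-scale signal (0 dB, 20 dB over threshold) settles at -10 dB of gain reduction with a 2:1 ratio, while signals below the threshold pass through untouched.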

Reverb for space

  • Adding reverb to dialogue to create a sense of space and match the visual environment
  • Choosing appropriate reverb types (room, hall, plate) and adjusting parameters (decay time, pre-delay, early reflections) to simulate realistic spaces
  • Using send/return routing to apply reverb selectively to dialogue tracks, allowing control over the wet/dry balance
  • Example: Applying a short room reverb to dialogue recorded in a small office to match the visual space
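
The send/return idea above boils down to a delayed, recirculating copy of the signal mixed back against the dry track. Here is a toy one-comb reverb (a Schroeder-style building block); real room reverbs combine many combs and allpass filters, and the delay, feedback, and wet values below are arbitrary illustrative choices.

```python
def comb_reverb(samples, delay_samples=1200, feedback=0.4, wet=0.25):
    """Toy 'room' reverb: one feedback delay line, blended wet/dry."""
    buf = [0.0] * delay_samples  # circular delay buffer
    out, idx = [], 0
    for x in samples:
        delayed = buf[idx]
        buf[idx] = x + delayed * feedback  # recirculate to build the decay tail
        idx = (idx + 1) % delay_samples
        out.append((1.0 - wet) * x + wet * delayed)  # wet/dry balance
    return out

# Feed an impulse through: the dry click is followed by decaying echoes
response = comb_reverb([1.0] + [0.0] * 3000)
```

Each echo arrives one delay later and is scaled by the feedback amount, which is why a higher feedback value reads as a longer decay time.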

Music mixing approaches

Setting appropriate levels

  • Balancing the overall level of the music track relative to the dialogue and sound effects, ensuring that it supports the story without overpowering other elements
  • Riding the music levels dynamically to accommodate changes in the scene's emotional intensity or dialogue delivery
  • Using automation to create smooth level transitions and avoid abrupt changes
  • Example: Lowering the music level during important dialogue exchanges and gradually raising it during montage sequences
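
Automation like this is just a gain envelope interpolated between keyframes. A minimal sketch, assuming keyframes are (time in seconds, gain in dB) pairs and interpolating linearly in dB, which is how most DAW fade curves behave by default:

```python
def automation_gain(num_samples, keyframes, sample_rate=48000):
    """Build a per-sample linear gain envelope from (time_sec, gain_db) keyframes."""
    times = [t * sample_rate for t, _ in keyframes]
    gains = [g for _, g in keyframes]
    env = []
    for n in range(num_samples):
        if n <= times[0]:
            g = gains[0]
        elif n >= times[-1]:
            g = gains[-1]
        else:
            # Find the surrounding keyframes and interpolate linearly in dB
            for i in range(len(times) - 1):
                if times[i] <= n < times[i + 1]:
                    frac = (n - times[i]) / (times[i + 1] - times[i])
                    g = gains[i] + frac * (gains[i + 1] - gains[i])
                    break
        env.append(10 ** (g / 20))  # dB -> linear multiplier
    return env

# One-second ramp from unity (0 dB) down to -12 dB for a dialogue exchange
envelope = automation_gain(48001, [(0.0, 0.0), (1.0, -12.0)])
```

Multiplying the music track by this envelope sample-by-sample produces the smooth dip described above with no abrupt jumps.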

Equalizing music tracks

  • Applying EQ to music tracks to carve out frequency space for dialogue and sound effects, preventing frequency masking and clutter
  • Cutting or reducing conflicting frequencies (e.g., low-mids) in the music that may interfere with the clarity of dialogue
  • Boosting or attenuating specific frequency ranges to shape the tonal balance of the music and suit the overall mix aesthetic
  • Example: Applying a low-mid cut around 200-400 Hz to a music track to create space for the dialogue's fundamental frequencies

Sidechaining for dialogue

  • Using sidechain compression to duck the music level automatically when dialogue is present, ensuring dialogue intelligibility
  • Setting up a compressor on the music track with the dialogue track as the sidechain input, triggering the compression when dialogue exceeds a certain threshold
  • Adjusting the sidechain compressor's settings (threshold, ratio, attack, release) to achieve a natural-sounding ducking effect without pumping artifacts
  • Example: Sidechaining a music bed to the narrator's voice track, creating automatic level dips in the music whenever the narrator speaks
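
The ducking behavior described above can be sketched as an envelope follower on the dialogue driving a smoothed gain on the music. The threshold, ducking depth, and time constants below are assumed starting values, not recommendations:

```python
import math

def duck_music(music, dialogue, threshold=0.05, duck_db=-12.0,
               attack_ms=20.0, release_ms=300.0, sample_rate=48000):
    """Sidechain ducking sketch: lower the music while dialogue is present."""
    atk = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    duck_gain = 10 ** (duck_db / 20)
    env, gain, out = 0.0, 1.0, []
    for m, d in zip(music, dialogue):
        env = max(abs(d), env * rel)            # peak follower on the sidechain input
        target = duck_gain if env > threshold else 1.0
        coef = atk if target < gain else rel    # fast duck, slow recovery
        gain = coef * gain + (1 - coef) * target
        out.append(m * gain)
    return out
```

The slow release is what avoids the "pumping" artifact mentioned above: the music recovers gradually between lines of narration instead of surging back instantly.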

Sound effects mixing

Layering multiple effects

  • Combining various sound effect recordings to create rich and detailed soundscapes that enhance the realism and immersion of the scene
  • Selecting complementary sound effects that work together harmoniously without cluttering the mix or masking important elements
  • Adjusting the relative levels, EQ, and panning of each sound effect layer to create a balanced and cohesive overall sound
  • Example: Layering multiple city ambience recordings (traffic, pedestrians, distant sirens) to create a convincing urban soundscape

Panning for directionality

  • Using stereo panning to position sound effects in the stereo field, creating a sense of directionality and space
  • Matching the panning of sound effects to the visual placement of objects or actions on-screen, reinforcing the connection between audio and visuals
  • Employing subtle panning automation to simulate movement or changes in perspective
  • Example: Panning a car passby effect from left to right to match the vehicle's movement across the screen

Creating realistic spaces

  • Utilizing reverb, delays, and other spatial effects to place sound effects in realistic acoustic environments that match the visual setting
  • Choosing appropriate reverb types and settings to simulate the size, materials, and characteristics of the depicted space
  • Applying distance-based attenuation and filtering to sound effects to create a sense of depth and perspective
  • Example: Adding a long, dense reverb to a gunshot sound effect to simulate the acoustic space of a large cave or canyon
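
Distance-based attenuation and filtering can be sketched together: inverse-distance gain (roughly -6 dB per doubling) plus a low-pass filter whose cutoff falls with distance, since air absorbs high frequencies first. The cutoff formula below is an arbitrary illustrative assumption, not a physical model:

```python
import math

def distance_cue(samples, distance_m, sample_rate=48000):
    """Sketch of distance cues: inverse-distance gain + distance-dependent low-pass."""
    gain = 1.0 / max(distance_m, 1.0)          # -6 dB per doubling of distance
    cutoff = 20000.0 / max(distance_m, 1.0)    # assumed cutoff law, for illustration
    rc = 1.0 / (2 * math.pi * cutoff)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x * gain - y)  # one-pole low-pass on the attenuated signal
        out.append(y)
    return out
```

A sound placed at 10 m ends up both 20 dB quieter and noticeably duller than the same sound at 1 m, which is what sells the sense of depth.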

Mixing for emotions

Supporting the story

  • Crafting the sound mix to support and enhance the emotional arc of the story, reinforcing the intended mood and tone of each scene
  • Using music, sound effects, and dialogue processing to create a cohesive emotional experience that aligns with the narrative's goals
  • Making mixing decisions based on the story's needs, prioritizing elements that serve the narrative and emotional impact
  • Example: Emphasizing unsettling sound design elements during a tense, suspenseful scene to heighten the sense of danger or mystery

Building tension vs relief

  • Manipulating the sound mix to create a sense of tension, suspense, or unease during dramatic or conflicting moments in the story
  • Using techniques such as increasing the presence of low-frequency drones, applying dissonant or atonal sound design, or creating a sense of claustrophobia through close-miked recordings
  • Contrasting tense moments with periods of relief or resolution, using more open, spacious, and harmonious sound design to signify a release of tension
  • Example: Gradually building tension in a chase scene by incrementally increasing the intensity and complexity of the sound effects, then abruptly cutting to silence or a calmer soundscape after the resolution

Guiding audience focus

  • Utilizing the sound mix to direct the audience's attention toward key story elements, characters, or emotions
  • Emphasizing or isolating specific sounds or dialogue lines that carry narrative significance, making them stand out in the mix
  • Reducing the prominence of non-essential background sounds or music during crucial moments to minimize distractions and maintain focus on the important audio information
  • Example: Bringing the sound of a ticking clock to the forefront of the mix during a scene where time is of the essence, underlining the urgency of the situation

Mixing workflow tips

Organizing the session

  • Establishing a clear and logical structure for the mixing session, using track folders, color-coding, and naming conventions to keep the project organized
  • Grouping related tracks (e.g., dialogue, music, sound effects) into buses or submixes for easier management and processing
  • Using markers, regions, or labels to identify key sections, cues, or transitions in the timeline
  • Example: Creating separate track folders for interviews, voiceovers, location sound, and archival audio within the dialogue group

Mixing in passes

  • Approaching the mixing process in focused passes, dedicating each pass to a specific aspect of the mix (e.g., dialogue, music, sound effects)
  • Starting with a rough balance pass to establish the overall levels and relationships between elements, then refining each component in subsequent passes
  • Utilizing automation and real-time mixing techniques to fine-tune the mix and create dynamic changes over time
  • Example: Beginning with a dialogue pass to ensure clarity and intelligibility, followed by a music pass to set the emotional tone, and finally a sound effects pass to enhance the realism and immersion

Printing stems for flexibility

  • Rendering separate stems (submixes) of the key audio components (dialogue, music, sound effects) in addition to the final full mix
  • Providing stems to the client or post-production team for flexibility in future revisions, localization, or remixing for different deliverables
  • Ensuring that the stems are properly labeled, time-aligned, and free of processing that may limit their usability in other contexts
  • Example: Delivering a set of stems that includes a cleaned-up dialogue stem, a music stem with separate tracks for score and source music, and a sound effects stem with backgrounds and hard effects

Loudness standards

Broadcast requirements

  • Adhering to the specific loudness requirements set by broadcast networks or regulatory bodies to ensure consistent and compliant audio levels
  • Measuring the integrated loudness (LKFS/LUFS) of the mix over the entire program duration and adjusting the overall level to meet the target value
  • Applying true peak limiting to prevent digital clipping and ensure compatibility with downstream processes
  • Example: Mixing a documentary for US television broadcast to an integrated loudness target of -24 LKFS and a true peak limit of -2 dBTP, as per the ATSC A/85 standard (EBU R128 regions target -23 LUFS instead)
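
Loudness normalization reduces to measuring the program's integrated level and applying one static gain to hit the target. The sketch below is deliberately rough: real ITU-R BS.1770 measurement applies K-weighting filters and gating before averaging, whereas this uses the plain mean square of the signal as a stand-in.

```python
import math

def normalize_loudness(samples, target_lufs=-24.0):
    """Rough loudness normalization (NOT full BS.1770: no K-weighting, no gating)."""
    mean_square = sum(x * x for x in samples) / len(samples)
    # BS.1770-style form: loudness = -0.691 + 10*log10(mean square)
    measured = -0.691 + 10 * math.log10(max(mean_square, 1e-12))
    gain = 10 ** ((target_lufs - measured) / 20)
    return [x * gain for x in samples]
```

True peak limiting would then be applied after this gain stage to catch any inter-sample overshoots before delivery.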

Online platform specs

  • Following the loudness specifications and best practices recommended by online streaming platforms (e.g., Netflix, YouTube, Vimeo) for optimal playback on various devices
  • Considering the platform's encoding and normalization processes when setting the mix levels and dynamics
  • Providing separate mixes or deliverables tailored to the specific requirements of each platform, if necessary
  • Example: Delivering a mix optimized for YouTube, targeting an integrated loudness of -14 LUFS and a true peak limit of -1 dBTP, as per YouTube's audio guidelines

Metering for compliance

  • Using dedicated loudness metering tools and plugins to measure and monitor the integrated loudness, short-term loudness, momentary loudness, and true peak levels of the mix
  • Calibrating the monitoring setup and metering tools to ensure accurate and reliable readings
  • Regularly checking the loudness metrics throughout the mixing process to maintain compliance with the target specifications
  • Example: Utilizing a loudness meter plugin (such as the Waves WLM Plus) on the main output bus to continuously monitor the integrated loudness and true peak levels of the mix

Common mixing challenges

Inconsistent production sound

  • Dealing with dialogue recordings that vary in quality, level, or tonal balance due to inconsistent production sound or recording techniques
  • Applying corrective EQ, compression, and noise reduction to minimize the differences between dialogue tracks and create a more cohesive sound
  • Utilizing dialogue replacement (ADR) or voiceover to replace problematic or unusable production dialogue
  • Example: Equalizing a dialogue track recorded with a different microphone to match the tonal characteristics of the other dialogue tracks in the scene

Clashing frequencies

  • Identifying and resolving frequency conflicts between different audio elements that can lead to a cluttered or muddy mix
  • Using EQ to carve out dedicated frequency ranges for each element, ensuring that they occupy distinct parts of the frequency spectrum
  • Employing sidechain processing or dynamic EQ to create space for competing elements in the same frequency range
  • Example: Applying a dynamic EQ to the music track to automatically attenuate the low-mid frequencies whenever the dialogue is present, preventing frequency masking

Retaining dialogue intelligibility

  • Ensuring that the dialogue remains clear, intelligible, and at the forefront of the mix, even in the presence of complex music or sound effects
  • Carefully balancing the levels of competing elements and using automation to prioritize dialogue during crucial moments
  • Applying dialogue-specific processing (e.g., EQ, compression, de-essing) to enhance speech clarity and reduce distracting artifacts
  • Example: Using a multiband compressor on the dialogue bus to control sibilance and harshness in the upper frequencies while maintaining the body and presence of the speech

Mixing tools and plugins

Essential equalizers

  • Parametric EQs: Versatile equalizers that allow precise control over the frequency, gain, and bandwidth of each filter band, enabling surgical adjustments to the tonal balance
  • Graphic EQs: Equalizers with fixed frequency bands and slider-based controls, useful for quick and broad tonal shaping
  • High-pass and low-pass filters: Essential tools for removing unwanted low-end rumble or high-frequency noise and defining the frequency boundaries of each element
  • Example: Using a parametric EQ to notch out a specific resonant frequency in a dialogue recording that is causing harshness or ringing

Compressors and limiters

  • Dynamics processors that control the dynamic range of the audio signal, reducing the difference between the loudest and quietest parts
  • Compressors: Used to even out variations in level, add punch and sustain, or glue elements together
  • Limiters: Used to prevent the signal from exceeding a set threshold, protecting against digital clipping and ensuring compliance with loudness standards
  • Example: Applying a gentle bus compressor to the dialogue submix to even out the levels and create a more consistent and cohesive dialogue track

Reverbs and delays

  • Spatial effects that simulate acoustic environments and create a sense of depth, space, and placement in the mix
  • Reverbs: Used to place elements in realistic or stylized spaces, ranging from small rooms to large halls or abstract environments
  • Delays: Used to create echoes, slap-back effects, or rhythmic patterns that enhance the texture and dimensionality of the mix
  • Example: Using a short, bright room reverb on the dialogue to subtly enhance the sense of space and match the visual environment of an interior scene
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.

